
Fluoroscopic X-ray image guidance for manual and robotic orthopedic surgery

Thesis submitted for the degree of Doctor of Philosophy

by Ziv R. Yaniv

Submitted to the Senate of the Hebrew University, April 2004

This work was carried out under the supervision of Prof. Leo Joskowicz.

Abstract

Current orthopedic practice relies heavily on the use of intra-operative fluoroscopic X-ray images as a means of navigation. Based on these images the surgeon determines the relative positions of anatomical structures, surgical tools, and implants. While readily available, fluoroscopic X-ray images are uncorrelated, have a small field of view, are geometrically distorted, and are a static representation of a dynamic spatial situation. To overcome these limitations, surgeons repeatedly acquire images from several viewpoints to mentally recreate the underlying spatial situation. This leads to undesired high cumulative radiation exposure. This thesis describes methods and systems which enhance or altogether replace the use of conventional fluoroscopic X-ray images as a means of navigation in orthopedics.

The first part of the thesis presents a method for aligning a pre-operative CT to the intra-operative situation using a few fluoroscopic X-ray images (2D/3D rigid registration). The anatomy-based method enables less invasive procedures and has a mean accuracy below 2 mm, which is adequate for most orthopedic procedures. This is the key component of a virtual reality navigation system which replaces the use of fluoroscopic X-ray images. The system displays three-dimensional dynamic virtual views of the intra-operative situation. This alleviates the problems associated with the use of uncorrelated, static, small field of view fluoroscopic X-ray images, and eliminates the radiation exposure.

The second part of the thesis presents a method for creating panoramic images. Panoramic images enable measurement of long bones and serve to document surgical outcomes. Individual overlapping fluoroscopic X-ray images are combined into a single image which represents the underlying anatomy. This alleviates the problems associated with the small field of view of the original images. The panoramic images enable the surgeon to assess the intra-operative situation quantitatively by performing measurements which are currently not available using conventional fluoroscopic X-ray imaging.

The third part of the thesis presents methods for the alignment of a robotic surgical assistant system for distal locking of long bone intramedullary nails. A patient-mounted robot is automatically positioned to provide mechanical guidance for hand-held drilling of the distal screws' pilot holes. The robot is aligned using only a few fluoroscopic X-ray images of the drill guide and distal locking nail holes. This system increases surgeon accuracy by providing mechanical guidance during drilling, and reduces radiation exposure by eliminating the need for the fluoroscopic X-ray images associated with the free-hand technique.

Acknowledgments

My first thanks go to my advisor, Prof. Leo Joskowicz, for introducing me to the field of computer assisted surgery, for his continuing support, and for teaching me that research requires a marathon runner's tenacity (luckily for me, just the tenacity and not the running). I thank my colleagues throughout the years at the Computer-Assisted Surgery and Medical Image Processing Laboratory: those I worked with, Ofri Sadowsky and Harel Livyatan, and those whose company I enjoyed, Dotan Knaan and Ruby Shamir. I thank the people at the Hadassah Medical Center, Ein Karem: Prof. Charles Milgrom, Dr. Ariel Simkin, Prof. Iri Leibergall, and Dr. Rami Mosheiff, for their advice and for providing the data sets used throughout this work.

Contents

1 Introduction
  1.1 Intra-operative navigation in CAOS
  1.2 Thesis overview
2 2D/3D registration of a pre-operative CT to intra-operative X-ray fluoroscopy
  Comparative in-vitro study of contact and image-based rigid registration for computer-aided surgery
  Gradient-Based 2D/3D Rigid Registration of Fluoroscopic X-ray to CT
3 Creation of intra-operative panoramic images from fluoroscopic X-ray images
  Long Bone Panoramas from Fluoroscopic X-ray Images
4 Robot-assisted Distal Locking of Long Bone Intramedullary Nails: Localization, Registration, and In-Vitro Experiments
  Introduction
  Previous work
  System concept
  Drill guide and nail hole identification
  Drill guide and nail pose estimation
  Experimental results
  Conclusions
  Point based pose estimation
5 Discussion
  Contributions and novel aspects
  Future work

Chapter 1

Introduction

Recent worldwide clinical trends point towards precise, minimally invasive surgery as the method of choice for many surgeries. Coupled with new medical imaging and computer graphics technology, these trends have shown the potential for better clinical results, reduced morbidity, shorter recovery and hospital stay times, and lower overall costs. However, much work remains to be done to achieve the full benefits of these new procedures and technologies.

Computer Assisted Surgery (CAS) aims at developing practical techniques and systems to aid in diagnostics, training of medical personnel, and surgical interventions. These systems complement and enhance the surgeon's skills while leaving the surgeon in control. The development of CAS systems relies on the availability of medical images, allowing for pre-operative planning or augmenting the surgeon's abilities during surgical intervention. The efforts towards developing such systems have started to produce integrated prototypes for a growing number of procedures [28, 44], most notably in neurosurgery, orthopedics, and endoscopic surgery.

Orthopedic surgery is a medical specialty with a large annual number of surgical procedures. According to the American Academy of Orthopedic Surgeons, more visits to physicians' offices were made for musculoskeletal conditions in 2000 than for any other reason, and approximately 6.5 million people underwent musculoskeletal procedures in 1996. These procedures require high accuracy while at the same time being minimally invasive. To address these challenges, orthopedic surgeons use pre-operative and intra-operative imaging. Current imaging modalities available to orthopedic surgeons are pre-operative Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), and intra-operative conventional film X-rays, fluoroscopic X-ray images, and ultrasound (US). Pre-operative images are not readily available for intra-operative use. Patients are exposed to harmful radiation when undergoing CT scans [29, 37], and surgeons are exposed to cumulative radiation when using fluoroscopic X-ray imaging [42, 46]. Since the intra-operative images are only two-dimensional, the surgeon mentally recreates the three-dimensional situation of the underlying anatomy and surgical tools. This requires significant skill and experience, and leads to positioning errors and complications in a non-negligible number of cases [4, 7]. To achieve better accuracy the surgeon repeatedly acquires images, monitoring progress and re-evaluating the underlying three-dimensional situation. When using X-ray fluoroscopy this leads to significant cumulative exposure to the surgeon.

Computer Assisted Orthopedic Surgery (CAOS) is the area of CAS in which the research presented in this thesis was conducted. The main goals of CAOS systems include:

1. Reduction of radiation exposure for surgical staff and patients.
2. Improving surgeon precision.
3. Shortening surgery time.
4. Enabling new minimally invasive surgical procedures.

Accurate intra-operative navigation which replaces the frequent use of X-ray fluoroscopy is currently the most promising approach towards achieving these goals. This is the approach taken in this work.

1.1 Intra-operative navigation in CAOS

CAOS systems for intra-operative navigation can be classified into five classes (Figure 1.1):

1. Fluoroscopy-based systems.
2. CT-based systems.
3. CT+fluoroscopy systems.
4. Image-less systems.
5. Robotic systems.

Figure 1.1: CAOS systems: (a) Fluoroscopy-based system (StealthStation from Medtronic, U.S.A.); (b) CT-based system (StealthStation from Medtronic, U.S.A.); CT+fluoroscopy systems display the same information; (c) Image-less system (OrthoPilot from Aesculap, Germany); (d) Robotic system (Acrobot from Imperial College, U.K.).

1. Fluoroscopy-based systems. Fluoroscopy-based systems augment the static fluoroscopic images with additional data. This type of system usually simulates continuous fluoroscopy by augmenting static fluoroscopic images with projections of tools which are updated in real time. The surgeon acquires a few fluoroscopic images at carefully chosen viewpoints which enable the surgeon to mentally recreate the underlying 3D anatomy. The system then corrects the images for geometric distortion, performs calibration, and tracks the anatomy and tools in real time. The static images are augmented in real time with projections of the tools, similar to continuous fluoroscopy. This type of navigation reduces the radiation exposure to the surgeon and patient, as only a few images are acquired. Mentally correlating the underlying anatomy and tools is also easier, as multiple images are displayed simultaneously rather than just a single image. An example of this type of system is described in [21]; commercial systems are available from Medtronic (U.S.A.) and Praxim (France).

2. CT-based systems. CT-based systems replace the use of intra-operative fluoroscopic images with virtual views of the 3D anatomy and instruments. These systems allow the surgeon to view the 3D anatomy from any desired viewpoint. As the viewed objects are displayed in 3D, there is no need for mental correlation between 2D imagery and 3D anatomy as is the case with fluoroscopy-based systems. The anatomy is displayed using the pre-operative CT data, either as a surface model or as a volumetric rendering. Acquisition of a pre-operative CT enables the surgeon to perform pre-operative planning, yet at the same time exposes the patient to radiation. Application of the pre-operative plan intra-operatively requires that the CT data be correlated with the intra-operative setting. Correlation is obtained by performing contact-based rigid registration: points are acquired either on the surface of the bone or by touching fiducial markers which are implanted pre-operatively. This precludes the use of this method for percutaneous procedures. Once the correlation is obtained, the bone and instruments are tracked in real time and the 3D virtual views are updated accordingly. An example of this type of system is described in [41]; commercial systems are available from Medtronic (U.S.A.) and Praxim (France).

3. CT+fluoroscopy systems. CT+fluoroscopy systems are like CT-based systems, except that the contact-based registration is replaced with an image-based registration procedure. A few fluoroscopic X-ray images of the anatomy are acquired, corrected for geometric distortion, and calibrated. Rigid registration is then performed using these intra-operative images and the pre-operative CT. This type of system enjoys the benefits of conventional CT-based systems while at the same time eliminating the invasiveness associated with contact-based registration. The key challenge in this type of system is the image-based registration of anatomical structures. Research on anatomy-based rigid registration started in 1994 [17, 33] and has been active since then [15, 32]. However, to date only the CyberKnife system [1] uses this approach clinically. The reasons that image-based registration of anatomical structures still remains a research subject are insufficient algorithm accuracy and robustness, and slow computation times. Recent research on image-based rigid registration of anatomical structures [20, 38, 47] shows that overcoming these challenges is within reach.

4. Image-less systems. Image-less systems replace the use of intra-operative imaging with schematic virtual views and parameter values. Instead of using intra-operative images, schematic models of the bone are displayed. These models are created from intra-operatively sampled points on the bone surface. Kinematic parameter values are acquired by flexing and moving the anatomy. Surgery is performed without the need for pre-operative or intra-operative imaging, using only the schematic models and the kinematic parameter values. An example of this type of system is described in [25]; a commercial system is available from Aesculap (Germany).

5. Robotic systems. Robotic systems are designed to assist the surgeon in implementing the pre-operative plan by mechanically positioning and sometimes executing the surgical action itself [8, 45]. The robots are either adapted floor-standing industrial robots or table-mounted custom-designed serial robots. As in CT-based systems, correlation of the pre-operative plan to the intra-operative setting is done by contact-based rigid registration. The relative configuration of the bone with respect to the robot is known at all times, either by immobilizing the anatomy or by real-time dynamic tracking. This type of system eliminates the need for intra-operative fluoroscopic images and precisely implements the pre-operative plan. An example of this type of system is described in [24]; a commercial system is available from Integrated Surgical Systems (U.S.A.).

1.2 Thesis overview

The goal of this thesis is to develop methods and systems for intra-operative navigation using fluoroscopic X-ray images, thus reducing radiation exposure to surgical staff and patients and enhancing the surgeon's ability to perform minimally invasive procedures. The work consists of three parts:

1. 2D/3D rigid registration of a pre-operative CT to intra-operative X-ray fluoroscopy (type 3 system).
2. Intra-operative panoramic image creation from fluoroscopic X-ray images (type 1 system).
3. Fluoroscopic image based guidance of a robotic system for distal locking of long bone intramedullary nails (type 5 system).

The first part consists of two papers which present results on 2D/3D rigid registration between pre-operative CT and intra-operative fluoroscopic X-rays. The first paper describes a comparative evaluation between 2D/3D rigid registration using X-ray fluoroscopy and contact-based rigid registration. We concluded that the image-based registration algorithm must be improved before it can be used clinically. This led us to develop the algorithms described in the second paper. These algorithms achieve an accuracy below 2 mm in both in-vitro and cadaver studies, which is clinically acceptable.

Comparative in-vitro study of contact and image-based rigid registration for computer-aided surgery, O. Sadowsky, Z. Yaniv, L. Joskowicz, Computer-Aided Surgery, Vol. 7(4), 2002.

Gradient-Based 2D/3D Rigid Registration of Fluoroscopic X-ray to CT, H. Livyatan, Z. Yaniv, L. Joskowicz, IEEE Trans. on Medical Imaging, special issue on Medical Image Registration, M. Fitzpatrick and J. Pluim eds., Vol. 22(11), 2003.

The second part consists of one paper which presents a system that creates a single panoramic image of a long bone from several individual fluoroscopic X-ray images. The panoramic image can be used for intra-operative measurements, allows the surgeon to assess the positions of long implants, and can be used to document surgical outcomes. These measurements and images are difficult or impossible to obtain with existing methods and can help to improve diagnosis, shorten surgery time, and improve outcomes.

Long Bone Panoramas from Fluoroscopic X-ray Images, Z. Yaniv, L. Joskowicz, IEEE Trans. on Medical Imaging, Vol. 23(1), 2004.

The third part presents an image-guided robotic system to assist orthopedic surgeons in performing distal locking of long bone intramedullary nails. The system consists of a bone-mounted miniature robot fitted with a drill guide that provides rigid mechanical guidance for hand-held drilling of the distal screws' pilot holes. The robot is automatically positioned using a few fluoroscopic X-ray images. The system achieves a mean angular error of 1.3° (std = 0.4°) between the computed drill guide axes and the actual locking hole axes, and a mean error of 3.0 mm (std = 1.1 mm) at the drill entry and exit points, which is adequate for successfully locking the nail.

Robot-assisted Distal Locking of Long Bone Intramedullary Nails: Localization, Registration, and In-Vitro Experiments, Z. Yaniv, L. Joskowicz, Technical Report, Leibniz Research Center, The Hebrew University of Jerusalem, April 2004.

Chapter 2

2D/3D registration of a pre-operative CT to intra-operative X-ray fluoroscopy

1. Comparative in-vitro study of contact and image-based rigid registration for computer-aided surgery, O. Sadowsky, Z. Yaniv, L. Joskowicz, Computer-Aided Surgery, Vol. 7(4), 2002.

2. Gradient-Based 2D/3D Rigid Registration of Fluoroscopic X-ray to CT, H. Livyatan, Z. Yaniv, L. Joskowicz, IEEE Trans. on Medical Imaging, special issue on Medical Image Registration, M. Fitzpatrick and J. Pluim eds., Vol. 22(11), 2003.
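Both papers in this chapter address 2D/3D rigid registration: searching for the pose of the CT volume whose simulated projections best match the acquired fluoroscopic images. As a conceptual illustration only, the sketch below shows a generic intensity-based loop with a toy parallel-beam projector and normalized cross-correlation; the thesis' gradient-based method uses a different similarity measure and a calibrated pinhole projection model, and every name and parameter below is an assumption made for the sketch.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def toy_drr(volume, pose):
    """Toy digitally reconstructed radiograph: rigidly move the CT volume by
    pose = (rx, ry, rz, tx, ty, tz) (degrees, voxels) and sum along axis 0.
    A parallel-beam stand-in for the calibrated pinhole geometry used in practice."""
    R = Rotation.from_euler("xyz", pose[:3], degrees=True).as_matrix()
    center = (np.asarray(volume.shape) - 1) / 2.0
    # affine_transform pulls values: input_coord = matrix @ output_coord + offset.
    offset = center - R.T @ (center + np.asarray(pose[3:]))
    return affine_transform(volume, R.T, offset=offset, order=1).sum(axis=0)

def ncc_cost(pose, volume, xray):
    """Negative normalized cross-correlation between simulated and real images."""
    p = toy_drr(volume, pose)
    a = (p - p.mean()) / (p.std() + 1e-9)
    b = (xray - xray.mean()) / (xray.std() + 1e-9)
    return -float((a * b).mean())

# Usage (hypothetical inputs): ct is a 3D numpy array, fluoro a 2D image
# resampled to the DRR grid. Powell is derivative-free; a gradient-based
# method such as the one in the second paper converges faster.
# best = minimize(ncc_cost, np.zeros(6), args=(ct, fluoro), method="Powell")
```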

Computer Aided Surgery 7 (2002)

Biomedical Paper

Comparative In Vitro Study of Contact- and Image-Based Rigid Registration for Computer-Aided Surgery

Ofri Sadowsky, Ziv Yaniv, and Leo Joskowicz

Computer-Aided Surgery and Medical Image Processing Laboratory, School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel

ABSTRACT We present an in vitro study of rigid registration methods for computer-aided surgery. The goals of the study were to obtain accuracy measures empirically under optimal laboratory conditions, and to identify the weak links in the registration chain. Specifically, we investigated two common registration methods (contact-based registration and image-based landmark registration) and established a framework for comparing the accuracy of both methods. The phantoms, protocols, and algorithms for tool tip calibration, contact-based registration with an optical tracker, fluoroscopic X-ray camera calibration, and fluoroscopic X-ray image-based landmark registration are described. Average accuracies of 0.5 mm (1.5 mm maximum) and 2.75 mm (3.4 mm maximum) were found for contact-based and image-based landmark registration, respectively. Based on these findings, the camera calibration was identified as being the main source of error in image-based landmark registration. Protocol improvements and algorithmic refinements to improve the accuracy of image-based landmark registration are proposed. Comp Aid Surg 7 (2002). Wiley-Liss, Inc.

Key words: registration; image-based registration; contact-based registration; accuracy measurements; fluoroscopy; tracking

Received August 1, 2001; accepted May 9, 2002. Address correspondence/reprint requests to: Prof. Leo Joskowicz, School of Computer Science and Engineering, The Hebrew University of Jerusalem, Givat Ram, Jerusalem 91904, Israel. josko@cs.huji.ac.il

INTRODUCTION

Registration is the task of finding a transformation from the coordinate system of one modality data set to another so that all features appearing in one modality are aligned with their appearance in the second. Registration is an essential step in most computer-aided surgery (CAS) systems, because it is necessary to match information from different data modalities obtained at different points in time. It is required to match the preoperative images and plans to the intraoperative situation, and to determine the relative positions of surgical tools and anatomical structures. Examples of deployed CAS systems include preoperative planning, intraoperative navigation, and robotic systems for orthopedic surgery, 1-4 neurosurgery, 5,6 and maxillofacial surgery, 7 among many others. Practical, accurate, and robust registration has emerged as one of the key technical challenges in the field. 3 Much recent research has been devoted to the development and validation of registration methods (see ref. 8 for an excellent survey).

This article presents an in vitro study of rigid registration methods for CAS. The purpose of the study was to obtain accuracy measures empirically under optimal laboratory conditions and to identify the weak links in the registration chain. Although the setup was different from the one used clinically, the study aimed at obtaining a lower bound on the achievable accuracy, and at identifying the main sources of error. Establishing the accuracy and robustness of the registration methods is essential for determining their potential applicability in different clinical settings. Understanding of the factors that affect the accuracy and robustness of the registration process provides quantitative criteria to support the selection of registration methods, and indicates where technical improvements are necessary.

We addressed the rigid registration of the preoperative three-dimensional (3D) model of an object with its intraoperative pose. This type of registration is very common in orthopedics, where mesh models of bone structures are constructed preoperatively from CT scans, and a surgical plan, consisting of landmark points, axes, or implants, is elaborated based on the images and models. The plan must then be registered to the intraoperative situation so that it can be carried out precisely with the help of a navigation system or robotic device. The registration is usually performed by using an instrumented pointer to touch implanted fiducials, anatomical landmarks, or points on the surface of the anatomy to obtain their precise spatial locations. The points are then matched with the corresponding points in the 3D model to obtain the set of three rotation and three translation parameters of the rigid transformation that achieves the best coincidence between them. This procedure is also used to register nearly rigid structures, such as brain structures in neurosurgery, for which adhesive fiducials are placed on the patient's forehead. 9

Although effective and accurate, contact-based rigid registration methods have two main drawbacks: they require part of the anatomy of interest to be exposed, and they can be time-consuming and error-prone. Intraoperative exposure of all or part of the anatomy of interest to enable it to be touched with a pointer can result in additional undesired surgical incisions. In percutaneous procedures, such as needle insertion or long bone closed fracture reduction, the additional surgical incisions defeat the purpose of the minimally invasive procedure. Additional incisions may also be necessary in some more invasive procedures, such as pelvic fracture reduction, to obtain the desired accuracy with an even distribution of points on the anatomical structure. It is only practical to acquire a few dozen points, because each acquisition is time-consuming. Additional errors are also introduced due to the presence of tissue and fat on the anatomical surface.

An alternative to contact-based registration is image-based registration. In image-based registration, one or more intraoperative images of the anatomy of interest are acquired at known camera poses. Feature points, such as fiducial centers, anatomical landmarks, or anatomical contours, are extracted from the images, and their spatial locations are computed from the camera pose and internal parameters. The most commonly used intraoperative imaging devices are mobile fluoroscopic X-ray C-arms and ultrasound units. The advantage of image-based registration is that it does not require contact or additional surgical exposure, and uses readily available intraoperative imaging devices.
It has the potential to be faster and more stable than contact-based registration, because many points can be extracted accurately and automatically with advanced image-processing techniques. However, image-based registration is technically much more challenging, because it depends on many more factors than contact-based registration. These factors include the geometric characteristics of the imaging camera and its pose, the image quality, and the quality of feature localization in the images.

This study establishes a technical framework for comparing the accuracy of both contact-based and image-based landmark registration methods. 10,11 Specifically, we focus on registration using an optical tracker and fluoroscopic X-ray images obtained by common mobile intraoperative C-arm units. To this end, we developed phantoms, protocols, and algorithms for tool tip calibration, contact-based registration, fluoroscopic X-ray camera distortion correction and calibration, and fluoroscopic X-ray image-based landmark registration. We designed and conducted in vitro experiments to test the accuracy and reliability of algorithms and protocols under optimal conditions. The individual steps of the registration algorithms were evaluated independently and lower bounds were established.

Previous Work

This section reviews previous work on contact- and image-based rigid registration methods, and theoretical and experimental accuracy studies. Contact-based registration methods are currently in use in many commercial systems. These methods match the actual location of implanted fiducials, anatomical landmarks, or points on the surface of the anatomy (a cloud of points) to the corresponding points in the preoperative model.
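The closed-form point-pair solution referenced throughout what follows computes the rotation and translation that best align the paired point sets in the least-squares sense. Horn's method is formulated with unit quaternions; the SVD-based sketch below (the Kabsch/Umeyama form) reaches the same optimum and is shown only as an illustration. The function name and interface are not from the paper.

```python
import numpy as np

def rigid_register_points(model_pts, measured_pts):
    """Closed-form least-squares rigid registration of paired 3D point sets.

    Returns R (3x3 rotation) and t (3-vector) minimizing
    sum_i || R @ model_i + t - measured_i ||^2.
    SVD-based equivalent of Horn's quaternion solution."""
    m = np.asarray(model_pts, dtype=float)
    d = np.asarray(measured_pts, dtype=float)
    mc, dc = m.mean(axis=0), d.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (m - mc).T @ (d - dc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = dc - R @ mc
    return R, t
```

The cloud-of-points (ICP) registration described in the following section alternates nearest-neighbor pairing against the model with this closed-form solve.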

The rigid transformation for fiducial- and landmark-based registration is obtained directly with Horn's closed-form solution, 12 or iteratively by distance minimization. 13 Cloud-of-points registration is performed with the Iterative Closest Point (ICP) method. 14 Several studies have reported millimetric accuracy in clinical settings for contact-based registration. 6,15-19,21

Several image-based registration methods have been proposed recently, although no commercial system, with the exception of the CyberKnife system for radiation therapy of brain tumors, 20 uses them. The registration can be based on geometric features in the image or on pixel intensity values. Geometric registration requires feature segmentation (e.g., fiducial center location or contour edge detection), while intensity-based registration requires generation of digitally reconstructed radiographic images and their comparison with the actual fluoroscopic X-ray images. The main difficulty with geometric registration is the accuracy and robustness of feature segmentation. The main difficulty with intensity-based registration is the size of the search space and the existence of many local minima. Several groups have reported millimetric accuracy for image-based registration: Lavallée and Hamadeh 23,25 reported an in vitro accuracy of 2 mm on a dry vertebra; Tang 26 reported an in vitro accuracy of 3 mm on long-bone foam models with six metal fiducials and a single image; and Larose 27 reported an in vitro accuracy of 1.3 mm on a mm³ phantom. A necessary first step for image-based registration is to obtain an accurate model of the imaging process: the camera has to be calibrated and the image corrected for distortion. Much recent work, including our own, 10,33 has been devoted to fluoroscopic X-ray image distortion correction 2,26,34,35 and pinhole camera calibration. 36,37

Several theoretical studies on the accuracy of rigid registration have been conducted. Fitzpatrick and West analyzed the target registration error (TRE), which is defined as the error between the measured and expected position of a point after registration, as a function of the fiducial localization error. They showed that the target registration error does not depend on the initial displacement between the model and the samples, and characterized the dependency of this error's distribution on the spatial configuration of the landmarks or fiducials. Pennec and Thirion 41 presented a statistical framework for point-based registration where geometric features are described as couples of data (coordinates) and uncertainty (covariance matrix). They described a rigid registration algorithm that yields both the motion and its uncertainty, and allows the computation of the expected error at every object point. Ellis et al. 42 presented a framework for registration stability evaluation, given known localization error bounds in the registered modalities.

MATERIALS AND METHODS

Algorithms

The goal of our study was to establish a common framework that would allow direct comparison of contact- and image-based landmark registration methods. This section presents the generic registration algorithm and a brief review of the algorithms that were used in our study (for full details, see ref. 11). The registration process consists of five steps:

1. Calibration: Calibrate the data acquisition devices (optical tracker, fluoroscopic X-ray C-arm) and correct for distortions.
2. Feature extraction: Find features in both data sets, such that a feature in one set can be matched with a corresponding feature in the other.
3. Feature pairing: Match the corresponding features in both data sets. Eliminate outlier pairings.
4. Similarity formulation: Define a disparity function, which is a global similarity measure between the two data sets, based on the pairings.
5. Dissimilarity reduction: Reduce the dissimilarity between the two data sets by minimizing the disparity function.

Steps 2-5 are repeated until convergence is reached.

Contact-Based Registration Method

Contact-based registration consists of matching two 3D point sets. Because the point sets are given, no feature extraction is necessary. When using fiducials or landmarks, the pairing is known, since it is determined a priori from the point acquisition protocol. The similarity measure between the data sets is the sum of the squared distances between pairs of points in both data sets. We use Horn's closed-form solution 12 for point-based registration, and the Iterative Closest Point (ICP) method 14 for cloud-of-points registration. Each sample point is iteratively matched with its nearest neighbor in the model set, and a transformation that minimizes the distance between them is then computed using Horn's formula. The algorithm is guaranteed to reach a local minimum, which is also the right one when the initial pose estimate is close enough (e.g., one obtained by approximate landmark registration).

For contact-based registration, the optical tracker and tool must be calibrated. The tracker does not require calibration, because it comes precalibrated from the factory. We calibrate the tool to determine the exact position of the tool tip using a custom calibration algorithm based on the CalTrax calibration tool (Traxtal Technologies, Toronto, Canada) shown in Figure 1. The algorithm derives the position of the tool tip from the geometry and position of the tracked pointer and the calibration tool. The geometry is determined by the diameter of the tool and the geometric characteristics of the calibration tool.

Fig. 1. Photograph of the CalTrax unit calibrating an active pointer.

Image-Based Landmark Registration Method

Image-based landmark registration consists of matching a set of 3D points (the model) with a set of 2D points extracted from the images, in this case fiducials. A prerequisite for fluoroscopic X-ray image-based registration is image distortion correction and calibration, for which we use the algorithms described in ref. 33. The fluoroscopic X-ray C-arm is modeled as a pinhole camera, as this has been shown to be a very good approximation of the X-ray imaging process. We use local bi-linear interpolation on a dense grid of points to compute a distortion map. For camera calibration, we use the pinhole camera calibration algorithm based on constrained optimization, as described by Faugeras. 36 This computes the internal camera parameters (focal length, image center, and scaling) and the external parameters corresponding to the camera pose. The algorithm is more robust and reliable than Tsai's method, 37 which we used originally, although it is very sensitive to small differences in fiducial centers. Because we perform image-based landmark registration with a phantom consisting of spherical metal balls as fiducials, the feature extraction and feature pairing steps are straightforward. To extract the fiducial centers to subpixel accuracy, we use the circle center Hough transform, 43 followed by gray-level thresholding segmentation (a sketch follows the Equipment paragraph below). The pairing between the model and computed centers is then done manually. We minimize the error measure consisting of the sum of distances between the model points and the closest points on the rays emanating from the camera source and passing through the image points.

Equipment

The equipment used in the study consisted of a standard PC computer with a video card and a monitor, a Philips BV 29 mobile fluoroscopic X-ray (C-arm) unit with a 9-inch field of view (Philips, Amsterdam, The Netherlands), a hybrid optical tracker (Polaris, Northern Digital Inc., Ontario, Canada), and tracking instruments from Traxtal Technologies (Toronto, Canada). We used both flat active tracking plates and cross-like optical passive trackers as dynamic reference frames, actively and passively tracked pointers for landmark acquisition, and the CalTrax calibration device for pointer tip calibration (Fig. 1). Images are directly downloaded from the fluoroscopic unit to the PC computer via the video output port with the GrabIt Pro II analog-to-digital frame grabber.
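As referenced above, the fiducial centers are extracted with a circle-center Hough transform followed by gray-level thresholding. The sketch below is one way to realize that step; the OpenCV-based coarse detection, the window sizes, and the threshold fraction are illustrative assumptions, not the paper's implementation or parameter values.

```python
import cv2
import numpy as np

def fiducial_centers(image, min_r=2, max_r=8):
    """Subpixel steel-ball fiducial centers: coarse circle-center Hough
    detection, then an intensity-weighted centroid over a gray-level
    threshold. Assumes an 8-bit image in which the balls appear dark."""
    blurred = cv2.GaussianBlur(image, (5, 5), 0)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                               param1=100, param2=15,
                               minRadius=min_r, maxRadius=max_r)
    centers = []
    if circles is None:
        return centers
    for x, y, r in np.round(circles[0]).astype(int):
        x0, y0 = max(x - r, 0), max(y - r, 0)
        win = image[y0:y + r + 1, x0:x + r + 1].astype(float)
        w = win.max() - win               # dark balls -> large weights
        w[w < 0.5 * w.max()] = 0.0        # gray-level threshold segmentation
        if w.sum() == 0.0:
            continue                      # degenerate window; skip
        ys, xs = np.mgrid[0:win.shape[0], 0:win.shape[1]]
        centers.append((x0 + (w * xs).sum() / w.sum(),
                        y0 + (w * ys).sum() / w.sum()))
    return centers
```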
Phantoms

We designed and built four custom phantoms for the in vitro registration experiments: a dewarp grid, a camera calibration phantom, a contact-based registration phantom, and a four-way registration phantom. With the exception of the dewarp grid, all phantoms have attached to them an active tracking plate that serves as a dynamic reference frame. The holes in the phantoms, which are used as precise landmarks for contact-based registration, are cone shaped, so that the center of a spherical-tip pointer inside the hole is invariant with respect to the pointer's orientation.

The dewarp grid is used to correct the images for geometric distortion (Fig. 2). It is a 7-mm-thick coated aluminum alloy plate with mm-diameter holes uniformly distributed on a grid at 10-mm intervals, machined to 0.02-mm precision. It attaches to the C-arm image intensifier via existing screw holes. This grid is simpler and cheaper to make than the commonly used steel balls or cross hairs mounted on a radiolucent plate. The grid features are sufficiently dense and yield very accurate results. 44

Fig. 2. Dewarp grid on the C-arm image intensifier: (a) photograph, and (b) fluoroscopic image.

The camera calibration phantom is used to obtain the intrinsic imaging parameters of the fluoroscopic X-ray unit (Fig. 3). It is a radiolucent, three-step hexagonal tower, with 13 positional holes drilled into it and 12 steel balls pressed into it with a positional accuracy of 0.05 mm. The tower dimensions are 200 mm in height and 60 mm, mm, and 140 mm in external diameter for the upper, middle, and lower steps, respectively. The tower is hollow, with cutout windows on the faces of the middle and upper steps to reduce its weight and increase its radiolucency. It is made out of Delrin, weighs approximately 1.5 kg, and attaches to the C-arm image intensifier via existing screw holes.

Fig. 3. Camera calibration tower on the C-arm image intensifier: (a) photograph, and (b) fluoroscopic image.

The camera calibration phantom was designed to have three reference planes, and to allow for robust and accurate ball-center computation. The holes are used for contact-based landmark registration to determine the positions of the steel balls during the calibration. The steel balls were placed right above the cutout windows, so that their appearance is sharp and contrasted in the fluoroscopic images. The placement pattern of the holes and balls was designed to avoid radial and mirror symmetry, thus allowing unambiguous automatic pairing. Reducing the weight was important to minimize the C-arm deflection. To verify that our phantom did indeed not affect the C-arm deflection, we attached tracking units to the source and image intensifier and measured their relative position with and without the phantom in several C-arm orientations. No significant difference was detected between measurements with and without the phantom.

The contact-based registration phantom is used for landmark and cloud-of-points contact registration (Fig. 4). It is a two-step hexagonal tower with 31 positional holes whose depth has been measured with a precision of 0.05 mm. The tower dimensions are 250 mm in height and 70 mm and 100 mm in external diameter for the upper and lower levels, respectively. The tower is a solid piece of Delrin whose dimensions are made to mm accuracy. The holes in the phantom are distributed so as to maximize the number of different distances between them. The surfaces of the object can be used to obtain sampled points from several planes for cloud-of-points contact registration.

Fig. 4. Photograph of the contact registration phantom with a pointer touching one of the holes on its side.

The four-way registration phantom can be used for both contact-based landmark and cloud-of-points contact registration, and for landmark image-based and contour image-based registration (Fig. 5). It is an L-shaped base with small L-shaped blocks on top. It has 11 steel balls pressed into it, and nine positional holes drilled into it with a positional accuracy of 0.05 mm. The L-shaped base has a length of 85 mm, a width of 70 mm, and a height of 40 mm, and the small blocks glued to it are mm³. The phantom is made out of Delrin, and was designed to fit in its entirety in the fluoroscopic image. The holes in the phantom are arranged so as to allow easy identification and to maximize the distances between them. The steel balls are spatially distributed so as to avoid overlaps from a wide range of viewing directions. The surfaces of the object can be used to obtain sampled points from several planes for cloud-of-points contact registration. The object shape can also be used for contour-based registration, which is not described in this article.

Protocols

We defined the following protocols for tool-tip calibration, contact-based registration, fluoroscopic camera distortion correction and calibration, and image-based landmark registration.
For tool-tip calibration, we place the cylindrical body of the tracked pointer on the CalTrax groove and push it until the spherical tip touches the unit's calibration wall (Fig. 1). From the unit geometry and tool diameter, the tool-tip position is determined directly.

For contact-based landmark registration, we touch the holes with the calibrated pointer and apply Horn's closed-form solution. 12 For contact-based cloud-of-points registration, we first obtain an approximate initial guess by touching the holes with the calibrated pointer, adding random error to the measurement, and solving in closed form. We then acquire a set of points on the surface of the phantom and apply iterative optimization to obtain the rigid transformation. These methods are applied to all three phantom objects.

For fluoroscopic X-ray camera calibration, we compute the distortion map and the intrinsic camera parameters at predefined C-arm poses. The images used for calibration are acquired with power settings of 48 and 52 kV, respectively. The image size is pixels, and the pixel is mm² after dewarping. To compute the dewarp map, we attach the dewarp grid to the image intensifier, acquire an image, and transfer it to the computer. The plate is then removed, and the calibration phantom is attached in its place. An active tracking plate is fixed on the calibration phantom, and the registration between this plate and the phantom's steel balls is determined using the contact-based landmark registration method. A second tracking plate is attached to the C-arm's image intensifier, and serves as a dynamic reference frame for the C-arm camera. Figure 3 shows the actual setup for calibration. The algorithm then automatically identifies from the fluoroscopic images the centers of the steel balls, whose diameter is 5-10 pixels, to subpixel accuracy, and computes the internal camera parameters using Faugeras' camera calibration method 36 (earlier experiments with Tsai's method 37 were not sufficiently robust). Figure 6 shows the registration chain for the camera calibration protocol.

Fig. 6. Schematic view of the C-arm calibration process. The coordinate systems are as follows: W, the global coordinate system, which coincides with the tracker's coordinate system; CARM, the local coordinate system of the tracking unit attached to the C-arm's image intensifier; CAL, the local coordinate system of the calibration object; TCAL, the local coordinate system of the tracking unit attached to the calibration phantom; and CAM, the virtual coordinate system of the camera. The transformations between the coordinate systems are: T^CARM_CAM between the camera and C-arm; T^TCAL_CAL between the calibration phantom and its tracking plate; T^W_TCAL between the phantom tracking plate and the world; and T^W_CARM between the C-arm and the world. The goal is to compute T^CARM_CAM and the internal camera parameters.

For image-based landmark registration, we use the four-way registration phantom. We place it on a radiolucent table whose height is roughly level with where a patient lies, and take images from the predetermined poses. After automatically identifying the centers of the steel balls in each image using the same method as above, we manually pair their centers with those of the model and compute the transformation.

Fig. 5. Four-way registration phantom on a radiolucent table: (a) photograph, and (b) fluoroscopic image acquired from above.

RESULTS

We designed and conducted experiments to determine the accuracy of the tracking system, contact-based landmark and cloud-of-points registration, camera calibration, and image-based landmark registration.

Tracking System

To establish a ground-truth basis for the tracker, we estimated the positioning accuracy for a static tool using the Polaris tracking system. The magnitude of this noise defines a limit on the accuracy of all other measurements performed with the system. We placed two calibrated tools on a table and recorded 15,000 samples of their coordinate frames and tool tips. We obtained a deviation of the tools' distance from the tracker's origin in the range of mm, and mm for the tool tips. The larger deviation accounts for the amplification of the tool orientation error by the distance along the tool's axis. The largest deviation is along the optical axis of the tracker camera.

Contact-Based Landmark Registration Accuracy

The goal of this experiment was to measure the accuracy of the contact-based landmark registration method and to determine if there was a significant improvement when more than five landmark points were used. To this end, we selected five spatially distributed holes on the contact registration phantom, touched them with the pointer to acquire their positions, and computed from them the registration matrix. We then compared the expected position of all 31 holes, including those used for the registration, with the computed ones and averaged the difference to obtain the TRE. We repeated the experiment with ten landmarks, four times each. Table 1 summarizes the results. We conclude that our tool calibration and fiducial registration algorithms result in submillimetric accuracy, with a mean of 0.55 mm and standard deviation of 0.22 mm with five fiducials, and a mean of 0.51 mm and standard deviation of 0.29 mm with ten fiducials. Note that the improvement with 10 rather than 5 landmarks is relatively small and does not provide a real advantage. This is probably because the results obtained with only five fiducials are already near optimal when considering the error bounds of the tracking system.

Table 1. Results of Contact-Based Landmark Registration Test (in mm). Columns: No., Mean, Std dev, Max, Min. (a) Target registration error with five fiducials. (b) Target registration error with 10 fiducials.

Contact-Based Cloud-of-Points Registration Accuracy

The goal of this experiment was to measure the accuracy of the cloud-of-points contact registration and its sensitivity to the initial guess computed from approximate landmarks. We used the same contact-based registration phantom as in the previous section. To simulate the position uncertainty of the landmarks, we added a 2.5-mm error to the hole depth and acquired three landmark positions in different spatial configurations. We used three landmarks for the initial registration and measured its accuracy as described in the previous section. We then acquired a cloud of 15 points on the surface of the phantom, computed the new registration, and compared the results. The experiment was repeated for three different configurations, as shown in Figure 7. Table 2 summarizes the results. We note that the landmark configuration affects the bias caused by the added error. In the first configuration, two of the landmarks were selected on opposite walls of the hexagon, and the third on the face between them. This caused cancellation of the error, and therefore the initial error was the smallest. In the second configuration, two landmarks on opposite walls were selected, with the third not lying between them.
In the third configuration, all landmarks were on the same side of the tower, thereby introducing a bias. As expected, this configuration yielded the worst results. The average error after the cloud-of-points optimization was 1.26 mm, with a standard deviation of 0.68 mm. We conclude that the cloud-of-points registration significantly reduces the error of the initial guess, as expected, but is still dependent on it.

Camera Calibration Accuracy and Sensitivity

The goal of these experiments was to measure the accuracy and sensitivity of the camera calibration process. Prior to the experiments, we calibrated the fluoroscopic camera with the calibration phantom attached to the image intensifier, as described above.
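The calibration yields the camera's pose in the calibration-phantom frame; chaining it with the tracker readings, as in Figure 6, gives the fixed camera-to-C-arm transform. Below is a minimal sketch of that composition with 4x4 homogeneous matrices; the frame-naming convention (T_b_a maps points from frame a to frame b) and the function names are assumptions made for the illustration, not the paper's code.

```python
import numpy as np

def inv_rigid(T):
    """Invert a 4x4 homogeneous rigid transform without a general inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def camera_to_carm(T_w_carm, T_w_tcal, T_tcal_cal, T_cal_cam):
    """Compose the chain of Figure 6.

    T_w_carm, T_w_tcal : tracker readings for the C-arm and phantom plates
    T_tcal_cal         : contact-based registration of the phantom to its plate
    T_cal_cam          : camera pose from the external calibration parameters
    Returns T_carm_cam, the fixed camera-to-C-arm transform."""
    return inv_rigid(T_w_carm) @ T_w_tcal @ T_tcal_cal @ T_cal_cam
```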

Fig. 7. Three configurations of landmarks for contact-based registration with the contact registration phantom.

Table 2. Results of the Contact-Based Registration Experiment (all errors are in mm). Columns: No., Mean, Std dev, Max, Min. (a) Coarse registration with three landmarks with 2.5-mm error in hole depth. (b) Fine registration with a cloud of 15 points and three landmarks with 2.5-mm error.

The first experiment was designed to estimate the calibration accuracy. For this, we used the calibration phantom and imaged it from different angles and at different heights. In the first setup, we placed the calibration phantom on the image intensifier and imaged it at predefined camera angles. In the second setup, we placed the calibration phantom on a radiolucent table and imaged it at a fixed camera angle, but at different heights, by raising and lowering the C-arm. In both cases, we computed the error as the distance between the actual position of the fiducial center (as provided by the tracked position of the phantom and its 3D model) and the back-projected ray emanating from the center of the steel ball in the image to the camera focal point (whose location was known from its tracked position). Table 3 summarizes the results. The first three rows are the results for the calibration phantom on the image intensifier at three different camera angles. The fourth is the result of the calibration phantom on the radiolucent table imaged at different camera heights. The fifth row averages these results. The mean error was 0.84 mm, with a worst case of 2.64 mm. The sixth row shows the residual calibration error, computed by taking the same image used for calibration and projecting the ball centers on it after the calibration parameters were computed. Because this residual error is much smaller than the calibration error (0.15 mm vs. 0.84 mm, on average), it can be neglected.

Table 3. Results of the First Camera Calibration Accuracy Experiment. The first and second columns show the image number and camera angle. Columns 3 to 6 show the average fiducial center shift along each of the three directions relative to the closest point on the ray. The remaining columns show the distances of the fiducials from the closest point on the ray (all values except the C-arm camera angles in the second column are in mm).

The second experiment was designed to evaluate the sensitivity of the calibration parameters to the fiducial center locations in the image. For each image that was used to compute the calibration parameters, we extracted the fiducial locations with two different gray-level thresholds. This yielded two sets of slightly different fiducial centers, with which two sets of calibration parameters were computed. Tables 4 and 5 summarize the results. In Table 4, we observe that a variation of 0.23 pixels (0.11 mm) can cause variations of up to 10 mm in some of the projection parameters. However, because the calibration parameters are not independent of each other, a direct comparison of the individual values is not very meaningful. To assess the effect of all the parameters, we computed for the same images the distance between the known fiducial location centers and the back-projected ray for each set of calibration parameters. The results are summarized in Table 5. We note that, despite the fact that the individual parameter changes are large, the mean distance difference is very small (about 0.06 mm).

Table 4. Results of the Second Camera Calibration Sensitivity Experiment, showing the sensitivity of the computed camera parameters to noise in the fiducial detection algorithm. The first column is the image number. The second column shows the average distance in mm (pixels in parentheses) between the detected fiducials on each image in two different settings; the three images had average detected-center differences of 0.16, 0.14, and 0.23 pixels. The next two columns show the computed camera focal distance (in mm) for each setting. The last two columns show the coordinates (in mm) of the projection center relative to the tracking unit on the C-arm.

Table 5. Results of the Second Camera Calibration Sensitivity Test, showing the actual effect of the change in camera parameters on the back-projection process (all values are in mm). Columns: Image #, No. of fiducials, Mean X, Mean Y, Mean Z, Mean. The X, Y, and Z values denote the differences between the closest points in the back-projection process. The last column displays the mean distance between the closest points.
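The error measure behind Tables 3 and 5 is the distance between a fiducial's known 3D position and the back-projected ray from the camera focal point through its detected image center. A minimal sketch of that computation follows, assuming all quantities are already expressed in tracker (world) coordinates; the function names and interfaces are illustrative.

```python
import numpy as np

def point_to_ray_distance(point, source, image_pt_3d):
    """Distance from a fiducial's known 3D position to the back-projected
    ray from the camera focal point (source) through the detected ball
    center (image_pt_3d), all in the same coordinate system."""
    d = image_pt_3d - source
    d = d / np.linalg.norm(d)
    # Project onto the ray; clamp so the closest point stays on the ray.
    s = max(float((point - source) @ d), 0.0)
    closest = source + s * d
    return float(np.linalg.norm(point - closest))

def ray_error_stats(fiducials, sources, image_pts_3d):
    """Mean and maximum per-fiducial ray distance, as reported in the tables."""
    errs = [point_to_ray_distance(p, s, q)
            for p, s, q in zip(fiducials, sources, image_pts_3d)]
    return float(np.mean(errs)), float(np.max(errs))
```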

23 Sadowsky et al.: Contact- versus Image-Based Registration Methods 233 Table 5. Results of the Second Camera Calibration Sensitivity Test Showing the Actual Effect of the Change in Camera Parameters on the Back-Projection Process (all values are in mm) Image # No. of fiducials Mean X Mean Y Mean Z Mean The X, Y, and Z values denote the differences between the closest points in the back-projection process. The last column displays the mean distance between the closest points. era s optical axis is the most difficult part of imagebased registration. Note that, despite the failure when using the lateral fluoroscopic image of the phantom for registration, combining the data from both camera angles using the least-squares formulation 25 succeeded and improved the registration results significantly. Nevertheless, the overall results of the image-based registration were still poor compared to those obtained with contact-based methods, with an average distance error of 2.75 mm, and 3.4 mm in the worst case when using images from two angles. Table 7 summarizes the results of the comparison to the gold standard. We observe that the rotation difference is less than 1.5 around each axis in most cases, and about 1 around each axis in the average case. This difference causes errors of about 2 mm when the radius of rotation is 100 mm. Note that larger differences appear in the translational part of the transformation, and that these differences are usually reduced when more images are used as input for the registration procedure. This suggests that the error in the calibration results is related more to the position of the radiation source than to the orientation of the image plane. DISCUSSION The goal of our experiments was to establish a framework for comparing the accuracy of contact and image-based landmark rigid registration. For this purpose, we developed phantoms and protocols for testing algorithms for tool tip calibration, contact-based registration with an optical tracker, fluoroscopic X-ray camera calibration, and fluoroscopic X-ray image-based landmark registration. All the experiments were conducted under optimal laboratory conditions, which are not necessarily those of the operating room. The goal was to establish a lower bound on the system accuracy and to identify and quantify the weak links of the registration framework. Understanding of the factors that affect the accuracy and robustness of the registration process provides quantitative criteria to support the selection of registration methods, and indicates where technical improvements are necessary. We found an average accuracy of 0.5 mm (1.5 mm maximum) and 2.75 mm (3.4 mm maximum) for contact-based and image-based landmark registration, respectively. We also found that, for contact-based registration, five fiducials are enough to produce near-optimal results, most likely due to the accuracy of the tracking systems. Although the accuracy of contact-based landmark registration is close to the accuracy of the tracking system, the accuracy decreases with cloud-of-points and image-based landmark registration. We identified the fluoroscopic X-ray camera calibration process as being the main source of error in image-based registration. This is because the projection parameters computed for one fluoroscopic image do not accurately fit the projection of other images taken while the fluoroscope s position remains fixed. Despite the errors in our experimental results, we have shown that our protocols can be used to compute Table 6. 
Results of the Image-Based Landmark Registration Test Registration Mean Shift Std shift Distance base X Y Z X Y Z mean std max min Contact AP Lateral -Failed- AP lateral Three computations of image-based registration were made, two of them based on a single image (anterior/posterior (AP) and lateral) and one on both. Columns 2 4 show the mean shift of the phantom positional holes relative to the hole position computed from the registration matrix. Columns 5 7 show the standard deviation of this shift. Columns 8 11 show the mean, standard deviation, maximum, and minimum distance between these points (all values are in mm). 19

Table 7. Results of the comparison between the contact- and image-based landmark registration tests. The values are the differences between the transformation matrix rotation and translation parameters obtained with image-based landmark registration and those obtained with contact-based landmark registration (rotation values in degrees, translation values in mm). Columns: rotation variation (X, Y, Z) and translation variation (X, Y, Z). Rows: Series 1 (one, two, and three images; mean), Series 2 (one and two images; mean), relative values mean, absolute values mean. (Numeric entries lost in transcription.)

CONCLUSION AND FUTURE WORK

We have described a methodology and an in vitro study of two types of rigid registration methods for computer-aided surgery: contact-based and image-based landmark registration. We have described phantoms, protocols, and algorithms for tool tip calibration, contact-based registration with an optical tracker, fluoroscopic X-ray camera calibration, and fluoroscopic X-ray image-based landmark registration. Camera calibration was identified as the main source of error in image-based registration. These results indicate that, in contrast to contact-based registration, image-based landmark registration requires further improvement before it can be used clinically. Because camera calibration was identified as the single most important source of error in image-based registration, a new calibration and tracking ring for the C-arm was designed and built, and the calibration algorithm has been improved [45]. Preliminary studies show a twofold improvement in accuracy, demonstrating that image-based registration has potential for clinical use. We are currently developing contour-based registration algorithms, which are being validated with dry bones. Plans for future work include in vivo studies.

ACKNOWLEDGMENT

This research was supported in part by a grant from the Israel Ministry of Industry and Trade for the IZMEL Consortium on Image-Guided Therapy. We thank Neil Glossop from Traxtal Technologies for his advice and support.

REFERENCES

1. DiGioia AM, Simon DA, Jaramaz B, Blackwell M, Morgan F, O'Toole RV, Colgan B. HipNav: Preoperative planning and intraoperative navigational guidance for acetabular implant placement in total hip replacement surgery. In: Nolte LP, Ganz R, editors: Computer Assisted Orthopedic Surgery. Bern, Switzerland: Hogrefe and Huber Publishers.
2. Hofstetter R, Slomczykowski M, Sati M, Nolte LP. Fluoroscopy as an imaging means for computer assisted surgical navigation. Comp Aid Surg 1999;4(2).
3. Taylor RH, Lavallée S, Burdea G, Mösges R, editors: Computer-Integrated Surgery: Technology and Clinical Applications. Cambridge, MA: MIT Press.
4. Joskowicz L, Milgrom C, Simkin A, Tockus L, Yaniv Z. FRACAS: A system for computer-aided image-guided long bone fracture surgery. Comp Aid Surg 1998;3(6).
5. Zamorano L, Matter A, Saenz A, Buciuc R, Diaz F. Interactive image-guided resection of cerebral cavernous malformations. Comp Aid Surg 1997;2(6).
6. Smith K, Frank KJ, Bucholz R. The Neurostation: a highly accurate, minimally invasive solution to frameless stereotactic neurosurgery. Comp Med Imag Graphics 1994;18(1).
7. Hassfeld S, Mühling J. Navigation in maxillofacial and craniofacial surgery. Comp Aid Surg 1998;3(1).
8. Maintz JBA, Viergever MA. A survey of medical image registration. Med Image Anal 1998;2(1).
9. Darabi K, Grunert P, Perneczky A. Accuracy of intraoperative navigation using skin markers. In: Lemke HU, Vannier MW, Inamura K, editors: Computer Assisted Radiology and Surgery. Proceedings of the 11th International Symposium and Exhibition (CAR '97), Berlin, June 1997. Amsterdam: Elsevier.

10. Yaniv Z, Sadowsky O, Joskowicz L. In-vitro accuracy study of contact and image-based registration: materials, methods, and experimental results. In: Lemke HU, Vannier MW, Inamura K, Farman AG, Doi K, editors: Computer Assisted Radiology and Surgery. Proceedings of the 14th International Congress and Exhibition (CARS 2000), San Francisco, 28 June-1 July 2000. Amsterdam: Elsevier.
11. Sadowsky O. Contact and image-based rigid registration in computer-assisted surgery: materials, methods, and experimental results. Master's thesis, School of Computer Science and Engineering, The Hebrew University of Jerusalem.
12. Horn BKP. Closed-form solution of absolute orientation using unit quaternions. J Optical Soc Am A 1987;4(4).
13. Lavallée S. Registration for computer-integrated surgery: methodology, state of the art. In: Taylor RH, Lavallée S, Burdea G, Mösges R, editors: Computer-Integrated Surgery: Technology and Clinical Applications. Cambridge, MA: MIT Press.
14. Besl PJ, McKay ND. A method for registration of 3D shapes. IEEE Trans Pattern Anal Machine Intell 1992;14(2).
15. Bolger C, Wigfield C, Melkent T, Smith K. Frameless stereotaxy and anterior cervical surgery. Comp Aid Surg 1999;4(6).
16. Gong J, Bächler R, Sati M, Nolte LP. Restricted surface matching, a new approach to registration in computer assisted surgery. In: Troccaz J, Grimson E, Mösges R, editors: Proceedings of the First Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine and Medical Robotics and Computer Assisted Surgery (CVRMed-MRCAS '97), Grenoble, France, March 1997. Lecture Notes in Computer Science. Berlin: Springer.
17. Germano IM, Queenan JV. Clinical experience with intracranial brain needle biopsy using frameless surgical navigation. Comp Aid Surg 1998;3(1).
18. Schmerber S, Chassat F. Accuracy evaluation of a CAS system: laboratory protocol and results with 6D localizers, and clinical experiences in otorhinolaryngology. Comp Aid Surg 2001;13(1).
19. Simon DA, Jaramaz B, Blackwell M, Morgan F, DiGioia AM, Kischell E, Colgan B, Kanade T. Development and validation of a navigational guidance system for acetabular implant placement. In: Troccaz J, Grimson E, Mösges R, editors: Proceedings of the First Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine and Medical Robotics and Computer Assisted Surgery (CVRMed-MRCAS '97), Grenoble, France, March 1997. Lecture Notes in Computer Science. Berlin: Springer.
20. Adler JR, Murphy MJ, Chang SD, Hancock SL. Image-guided robotic radiosurgery. Neurosurgery 1999;44(6).
21. Villalobos H, Germano IM. Clinical evaluation of multimodality registration in frameless stereotaxy. Comp Aid Surg 1999;4(1).
22. Guéziec A, Kazanzides P, Williamson B, Taylor RH. Anatomy based registration of CT-scan and intraoperative X-ray images for guiding a surgical robot. IEEE Trans Med Imaging 1998;17(5).
23. Hamadeh A, Lavallée S, Cinquin P. Automated 3-dimensional computed tomographic and fluoroscopic image registration. Comp Aid Surg 1998;3(1).
24. Hamadeh A, Sautot P, Lavallée S, Cinquin P. Towards automatic registration between CT and X-ray images: cooperation between 3D/2D registration and 2D edge detection. Proceedings of the Second Annual Symposium on Medical Robotics and Computer Assisted Surgery, Baltimore, Maryland, November 1995. New York: Wiley.
25. Lavallée S, Szeliski R, Brunie L. Anatomy-based registration of 3D medical images, X-ray projections, and 3D models using octree-splines. In: Taylor RH, Lavallée S, Burdea G, Mösges R, editors: Computer-Integrated Surgery: Technology and Clinical Applications. Cambridge, MA: MIT Press.
26. Tang TSY. Calibration and point based registration of fluoroscopic images. Master's thesis, Department of Computing and Information Science, Queen's University, Kingston, Ontario, Canada.
27. LaRose D. Iterative X-ray/CT registration using accelerated volume rendering. PhD thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, May 2001.
28. Lemieux L, Jagoe R, Fish DR, Kitchen ND, Thomas GT. A patient-to-computed-tomography image registration method based on digitally reconstructed radiographs. Med Phys 1994;21(11).
29. Murphy MJ. An automatic six-degree-of-freedom registration algorithm for image-guided frameless stereotaxic radiosurgery. Med Phys 1997;24(6).
30. Penney GP, Weese J, Little JA, et al. A comparison of similarity measures for use in 2D-3D medical image registration. IEEE Trans Med Imaging 1998;17.
31. Roth M, Brack C, Burgkart R, Czopf A, Götte H, Schweikard A. Multi-view contourless registration of bone structures using a single calibrated X-ray fluoroscope. In: Lemke HU, Vannier MW, Inamura K, Farman AG, editors: Computer Assisted Radiology and Surgery. Proceedings of the 13th International Congress and Exhibition (CARS '99), Paris, France, June 1999. Amsterdam: Elsevier.
32. Pluim JPW, Maintz JBA, Viergever MA. Image registration by maximization of combined mutual information and gradient information. In: Delp SL, DiGioia AM, Jaramaz B, editors: Proceedings of the Third International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2000), Pittsburgh, PA, October 2000. Lecture Notes in Computer Science. Berlin: Springer.

33. Yaniv Z. Fluoroscopic X-ray image processing and registration for computer-aided orthopedic surgery. Master's thesis, Institute of Computer Science, The Hebrew University of Jerusalem.
34. Brack C, Burgkart R, Czopf A, Götte H, Roth M, Radig B, Schweikard A. Accurate X-ray-based navigation in computer-assisted orthopedic surgery. In: Lemke HU, Vannier MW, Inamura K, Farman AG, editors: Computer Assisted Radiology and Surgery. Proceedings of the 12th International Symposium and Exhibition (CAR '98). Amsterdam: Elsevier.
35. Schreiner S, Funda J, Barnes AC, Anderson JH. Accuracy assessment of a clinical biplane fluoroscope for three-dimensional measurements and targeting. In: Proceedings of SPIE Medical Imaging.
36. Faugeras O. Three-Dimensional Computer Vision: A Geometric Viewpoint. Cambridge, MA: MIT Press.
37. Tsai R. A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Trans Robotics Automat 1987;RA-3(4).
38. Fitzpatrick JM, West JB, Maurer CR Jr. Predicting error in rigid-body, point-based registration. IEEE Trans Med Imaging 1998;17(5).
39. Fitzpatrick JM, West JB. The distribution of target registration error in rigid-body, point-based registration. IEEE Trans Med Imaging 2001;20.
40. West JB. Predicting error in point-based registration. PhD thesis, Department of Computer Science, Vanderbilt University, Nashville, Tennessee.
41. Pennec X, Thirion JP. A framework for uncertainty and validation of 3D registration methods based on points and frames. Int J Comp Vision 1997;25(1).
42. Ellis RE, Fleet DJ, Bryant JT, Rudan J, Fenton P. A method for evaluating CT-based surgical registration. In: Troccaz J, Grimson E, Mösges R, editors: Proceedings of the First Joint Conference on Computer Vision, Virtual Reality and Robotics in Medicine and Medical Robotics and Computer Assisted Surgery (CVRMed-MRCAS '97), Grenoble, France, March 1997. Lecture Notes in Computer Science. Berlin: Springer.
43. Leavers VF. Which Hough transform? Comput Vision Graphics Image Process Image Understand 1993;58(2).
44. Yaniv Z, Joskowicz L, Simkin A, Garza-Jinich M, Milgrom C. Fluoroscopic image processing for computer-aided orthopaedic surgery. In: Wells WM, Colchester A, Delp S, editors: Proceedings of the First International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI '98), Cambridge, MA, October 1998. Lecture Notes in Computer Science. Berlin: Springer.
45. Livyatan H, Yaniv Z, Joskowicz L. Robust automatic C-arm calibration for fluoroscopy-based navigation: a practical approach. In: Dohi T, Kikinis R, editors: Proceedings of the 5th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2002), Tokyo, Japan, October 2002. Lecture Notes in Computer Science, Vol 2. Berlin: Springer.

IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 22, NO. 11, NOVEMBER 2003

Gradient-Based 2-D/3-D Rigid Registration of Fluoroscopic X-Ray to CT

Harel Livyatan, Ziv Yaniv, Student Member, IEEE, and Leo Joskowicz*, Senior Member, IEEE

Manuscript received June 4, 2003; revised August 4, 2003. This research was supported in part by a grant from the Israel Ministry of Industry and Trade for the IZMEL Consortium on Image-Guided Therapy. Asterisk indicates corresponding author. H. Livyatan and Z. Yaniv are with the School of Engineering and Computer Science, The Hebrew University of Jerusalem, Jerusalem 91904, Israel. *L. Joskowicz is with the School of Engineering and Computer Science, The Hebrew University of Jerusalem, Jerusalem 91904, Israel (e-mail: josko@cs.huji.ac.il).

Abstract: We present a gradient-based method for rigid registration of a patient's preoperative computed tomography (CT) to its intraoperative situation with a few fluoroscopic X-ray images obtained with a tracked C-arm. The method is noninvasive, anatomy-based, requires simple user interaction, and includes validation. It is generic and easily customizable for a variety of routine clinical uses in orthopaedic surgery. Gradient-based registration consists of three steps: 1) initial pose estimation; 2) coarse geometry-based registration on bone contours; and 3) fine gradient projection registration (GPR) on edge pixels. It optimizes speed, accuracy, and robustness. Its novelty resides in using volume gradients to eliminate outliers and foreign objects in the fluoroscopic X-ray images, in speeding up computation, and in achieving higher accuracy. It overcomes the drawbacks of intensity-based methods, which are slow and have a limited convergence range, and of geometry-based methods, which depend on the image segmentation quality. Our simulated, in vitro, and cadaver experiments on a human pelvis CT, dry vertebra, dry femur, fresh lamb hip, and human pelvis under realistic conditions show a mean target registration accuracy of 0.5-1.7 mm (2.6 mm maximum).

Index Terms: Fluoroscopic X-ray to CT registration, gradient-based, image registration, 2D/3D rigid registration.

I. INTRODUCTION

REGISTRATION is the task of finding a transformation from the coordinate system of one data set to another so that all features that appear in both data sets are aligned. Registration is an essential step in most computer-aided surgery (CAS) systems, since it is necessary to match information from different data modalities obtained at different times. In image-guided surgery, it is required to match the preoperative images and plans to the intraoperative situation, and to determine the relative positions of surgical tools and anatomical structures. Examples of deployed CAS systems include preoperative planning, intraoperative navigation, and robotics systems for orthopaedic surgery [1]-[4], for neurosurgery [5], [6], and for radiosurgery [7], among many others. Practical, accurate, and robust registration has emerged as one of the key technical challenges of CAS.

One of the most sought-after methods is anatomy image-based rigid registration between preoperative and intraoperative data sets. The goal is to enable surgeons to use preoperative plans and computed tomography (CT) and magnetic resonance imaging (MRI) data in the operating room for image-guided navigation and robot positioning. The registration can be performed with a few intraoperative fluoroscopic X-ray or ultrasound images, which are ubiquitous, noninvasive, and easy to acquire.
Current CAS systems rely on implanted fiducials, which require an additional surgical procedure, or on points obtained by direct contact with the anatomy surface, which require additional exposure of the anatomy and can be time-consuming and error-prone. The alternative is to use the imaged bone shapes, which are rigid, to perform the registration. This allows for less invasive procedures, is faster and less prone to human error, and does not require surgeon training.

Anatomy image-based rigid registration is technically much harder than fiducial- or contact-based registration because it requires analyzing the intraoperative images. The images may include foreign objects, such as surgical tools and implants, not present in the preoperative data. Fluoroscopic X-ray images have a small field of view, limited resolution, and orientation-dependent geometric and intensity distortions. Research on anatomy image-based registration started in 1994 [8], [9] and is very active [10]-[20]. However, with the exception of the CyberKnife radiosurgery system [7], none of these methods is in routine clinical use. The main obstacles are robustness, accuracy, computation time, and lack of integration.

In this paper, we present a new gradient-based method for rigid registration of a patient's preoperative CT to its intraoperative situation with a few fluoroscopic X-ray images acquired by a tracked C-arm [21]. The method is noninvasive, requires simple user interaction, and includes validation. It is generic and easily customizable for a variety of routine orthopaedic procedures. It consists of three steps: 1) initial pose estimation; 2) coarse geometry-based registration using bone contours; and 3) fine gradient projection registration (GPR) using edge pixels. This hybrid method optimizes speed, accuracy, and robustness. Its novelty resides in using the relationship between CT and fluoroscopic X-ray image gradients instead of geometric or intensity information. Volume gradients and their projections help eliminate foreign objects present in fluoroscopic X-ray images and achieve higher accuracy. Our method overcomes the drawbacks of intensity-based methods, which are slow and have a narrow convergence range, and those of geometry-based methods, which depend on the contour segmentation quality of the fluoroscopic X-ray and CT images.

Fig. 1. Classification of rigid registration methods.

II. PREVIOUS WORK

For a comprehensive survey of medical image registration methods, see [22]. We classify rigid registration algorithms along a line, based on how much of the original data is used to compute the transformation (Fig. 1). On one side are geometry-based algorithms, which use a few selected points or features. On the other are intensity-based algorithms, which use most of the intensity information in both data sets.

Geometry-based registration algorithms match selected geometric features from each data set by finding the transformation that minimizes the sum of distances between paired features. Features can be implanted fiducials, anatomical landmarks, or surface contour features. The algorithms can be classified into four categories: 1) point/point: contact or image landmark points on the anatomy surface to CT landmark points (5-10 points in each) [4]; 2) point/surface: contact cloud of points on the anatomy surface to the CT surface (10-30 points versus a dense set of surface points) [23]-[25]; 3) contour/surface: contours in fluoroscopic X-ray or ultrasound images to the CT surface (sampled contour points or 1-10 splines versus a dense set of surface points) [9], [11], [13], [26]; and 4) surface/surface: CT or skin surface data from a scanning laser to CT (dense point sets, ridges/surfaces) [27].

Geometry-based registration consists of four steps: 1) feature extraction: choosing the features of interest in each data set; 2) feature pairing: establishing correspondences between the features of each data set; 3) dissimilarity formulation and outlier removal: quantifying the dissimilarity between paired features, e.g., the sum of pairwise distances; and 4) dissimilarity reduction: finding the transformation that optimally minimizes the dissimilarity. Steps 2)-4) are repeated until convergence. Feature extraction requires segmenting the CT and fluoroscopic X-ray images. Features are paired by finding, for each feature in one data set, the closest (distance-wise) feature in the other data set [23], [24]. Removal of outliers can be either explicit [24] or implicit, by weighing the paired features [25]. Algorithms for geometry-based point-to-CT registration include the iterative closest point (ICP) algorithm [23], [24] and its variations [25], [28]-[30]. X-ray contours to CT surface mesh algorithms have been developed by Hamadeh et al. [9], [11] and Guéziec et al. [13], [26]. Fitzpatrick et al. [31] show how to estimate the registration error of point-based rigid registration.

The key characteristic of geometry-based methods is that they use a small fraction of the image data, usually fiducial centers and anatomy surface points, whose location is assumed to be known very accurately. Geometry-based registration works best with a high-quality segmentation, an efficient feature pairing scheme, and good outlier removal. For point/point registration, an initial-position-independent, closed-form solution that minimizes the sum of distances is known [32]. In practice, robustness is achieved by first performing coarse registration with landmarks, followed by fine registration with surfaces. Commercial systems rely on implanted fiducials and on clouds of points to perform the registration. Geometry-based registration between fluoroscopic X-ray images and CT has not yet reached the market, most likely because robustly segmenting fluoroscopic X-ray images is technically challenging.
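To make the point/point case concrete, the closed-form least-squares solution referenced above [32] can be written in a few lines. The sketch below is a minimal illustration using the SVD formulation, which is equivalent to Horn's quaternion solution for this problem; all variable names are illustrative:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto paired points Q.

    P, Q: (n, 3) arrays of corresponding points, n >= 3, not collinear.
    Minimizes sum ||R @ p_i + t - q_i||^2 (the Arun/Horn closed-form solution).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Example: recover a known pose from five noiseless fiducials.
rng = np.random.default_rng(0)
P = rng.uniform(-50, 50, (5, 3))
R_true = rigid_fit(rng.normal(size=(4, 3)), rng.normal(size=(4, 3)))[0]  # just some valid rotation
Q = P @ R_true.T + np.array([10.0, -5.0, 2.0])
R, t = rigid_fit(P, Q)
print(np.allclose(Q, P @ R.T + t))  # True
```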
Intensity-based algorithms match the intensities of one image data set with the intensities of the other by minimizing a measure of the difference between them, such as histogram difference, normalized cross-correlation, or mutual information [33]-[35]. The matching can be restricted to regions of interest (ROIs) in the images, such as regions around bone surfaces in the CT and fluoroscopic X-ray images; in this case, the matching is closer to geometry-based registration. The algorithms can be classified into two categories: 1) ROIs/ROIs: ROIs in both data sets, usually in the vicinity of the anatomy surface [10], [12], [14], [16]-[18], [36], and 2) image/image: the entire CT and X-ray images are used (2-10 X-ray images) [8].

Intensity-based registration consists of three steps: 1) generation of digitally reconstructed radiographs (DRRs) for each camera pose; 2) measurement of the pose difference by comparing the DRRs with the real fluoroscopic X-ray images; and 3) computation of a pose that reduces the difference. The first step requires precomputation and fast DRR generation [15], [36]. The second step requires computing a similarity measure, which is not guaranteed to lead to an optimal solution [37]. Algorithms for intensity-based registration between X-rays and CT started with Lemieux et al. [8], who were followed by many others [10], [12], [14], [16], [17], [36], [37]. The CyberKnife radiosurgery system [7] is the only commercial system in routine clinical use that uses this registration method.

The key characteristic of intensity-based registration is that it does not require segmentation. The rationale is that using as much information as is available and averaging it out reduces the influence of outliers and is thus more robust. However, this approach is computationally expensive, since it requires generating high-quality DRRs and searching a six-dimensional (6-D) space with local minima which depend on the similarity measure employed. It requires an initial pose guess close to the final pose and the definition of ROIs.
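As a concrete illustration of the intensity-based loop just described, the following sketch scores a candidate pose by comparing a DRR against a fluoroscopic image with normalized cross-correlation. It is an assumption-laden sketch rather than any published implementation: `render_drr` is a hypothetical stand-in for a fast DRR generator, and an external optimizer is assumed to minimize the cost over the 6-D pose:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-shape images (1.0 = identical up to gain/offset)."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def pose_cost(pose, ct_volume, xray, render_drr):
    """Dissimilarity of the DRR rendered at `pose` versus the real X-ray image.

    `render_drr(ct_volume, pose)` is a hypothetical DRR generator returning an
    image with the same shape as `xray`; lower cost means better agreement.
    """
    return -ncc(render_drr(ct_volume, pose), xray)
```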

Fig. 2. (a) Registration chain between the actual bone and its preoperative CT; (b) 2-D/3-D image registration geometry between X-ray and CT.

Very recent work, conducted independently of ours, describes gradient-based registration between X-rays and CT or MR images [20], [38]. The idea is to compute projections of the volumetric data gradients, compare them with the X-ray image gradients, and adjust the volumetric data set pose accordingly. The gradients need not be computed on all rays, but rather on selected rays in the vicinity of the anatomy contours, as proposed in [39]. While the idea of using gradients to establish the correspondence is similar to ours, the algorithm described in [20] relies on the pairing between rays emanating from the camera focal point and passing through image pixels and points on the bone surface. This pairing is very sensitive to discontinuities in both data sets and can produce outliers, which degrade the accuracy of the computed transformation. The experimental results reported in [20] assume distortion-free X-ray images and initial pose guesses near the final pose, and do not account for tracking errors, as our experiments do.

III. GOALS AND SPECIFICATIONS

Our goal is to develop a practical anatomy image-based rigid registration protocol and algorithm for preoperative CT to intraoperative fluoroscopic X-ray registration. The method should be generic and easily customizable to a variety of rigid anatomical structures (pelvis, vertebra, femur, tibia) and conditions (healthy, fractured, with tumors). Following a careful analysis of the most common orthopaedic procedures, we compiled the following specifications.

The system requirements are: 1) accuracy: a target registration error of 1-1.5 mm on average (2- to 3-mm worst case) measured on the bone surface; 2) robustness: the registration succeeds on the first try at least 95% of the time with an error of at most 2 mm; 3) speed: the registration process takes at most 1 min; 4) user interaction: simple and minimal preoperative and intraoperative user interaction; and 5) validation: both qualitative and quantitative, after the registration.

The data characteristics are: 1) a CT data set with 0.5- to 1.5-mm-thick slices 1-3 mm apart, 12-bit gray-scale, and a pixel size of 0.5 mm or less; 2) two to five fluoroscopic X-ray images, 8-bit gray-scale with a pixel size of 0.5 mm or less, possibly including anatomy and surgical objects not present in the CT; and 3) C-arm position and orientation computed using an optically tracked target rigidly attached to the C-arm image intensifier, with the target position known to submillimetric accuracy at a distance of 1-2 m (the current performance of commercial optical trackers).

The system consists of: 1) a PC with a monitor; 2) a video frame grabber; 3) a position sensor for the C-arm (e.g., an optical tracker); and 4) a calibration and distortion correction grid mounted on the C-arm.

IV. PROBLEM DEFINITION

The problem consists of finding the rigid transformation that relates the preoperative CT bone model to the intraoperative bone coordinate frame. This transformation can be obtained with a location tracker and a fluoroscopic X-ray imaging system by constructing the transformation chain shown in Fig. 2(a). The C-arm is modeled as a pinhole camera, with the camera focal point at the X-ray source and the image plane at the image intensifier. Since the imaging characteristics of the C-arm are orientation dependent, the calibration is computed anew for every orientation [40]. Fig. 2(b) illustrates the registration geometry.

The transformation chain consists of five transformations, where $i$ indicates the C-arm viewpoint: the tracked poses $T^{tracker}_{target_i}$ of the C-arm-mounted target and $T^{tracker}_{bone}$ of the bone reference frame, which are given directly by the tracker; the orientation-dependent calibration transformation $T^{target_i}_{camera_i}$, which is computed as described in [40]; the desired transformations $T^{camera_i}_{bone}$ relating the camera poses to the bone model, as shown in Fig. 2(b); and the initial bone pose estimate $T^{bone}_{CT}$, which is successively refined with the two-dimensional (2-D)/three-dimensional (3-D) rigid registration algorithm described in Section V until convergence.
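In code, the chain above is just a product of homogeneous transforms. The sketch below, a minimal illustration with invented variable names (the paper does not prescribe this notation), composes the tracked and calibrated transforms to express the CT pose in each camera frame:

```python
import numpy as np

def inv(T):
    """Invert a 4x4 rigid transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def ct_to_camera(T_tracker_target_i, T_tracker_bone, T_target_camera_i, T_bone_ct):
    """Compose the registration chain for viewpoint i.

    T_tracker_target_i: tracked pose of the C-arm-mounted target
    T_tracker_bone:     tracked pose of the bone reference frame
    T_target_camera_i:  orientation-dependent C-arm calibration transform
    T_bone_ct:          current CT-to-bone estimate (refined by 2D/3D registration)
    Returns the 4x4 transform mapping CT coordinates into camera i coordinates.
    """
    T_camera_tracker = inv(T_tracker_target_i @ T_target_camera_i)
    return T_camera_tracker @ T_tracker_bone @ T_bone_ct
```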
V. REGISTRATION PROTOCOL AND ALGORITHM

The registration protocol is as follows. Preoperatively, we obtain the CT data and automatically compute from it, off-line, three data structures: a bone surface mesh, a bounding sphere octree, and a volume gradient vector field. Intraoperatively, before the surgery starts, the tracking system is set up and the calibration grid is mounted on the C-arm. Then, the patient is prepared, and 2-5 fluoroscopic X-ray images from various orientations are taken. The fluoroscopic X-ray images are corrected for distortion and the camera parameters are computed for each pose [40]. The transformation is then computed with the algorithm described below (the overall pipeline is sketched after this paragraph).
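Put as a pipeline, the protocol reads as below. Every function name is a hypothetical placeholder for the corresponding step in the text, not an API from the paper; the fragment is an outline of the data flow rather than a working implementation:

```python
def register_ct_to_patient(ct, fluoro_shots, tracker, calib_grid):
    """End-to-end sketch of the registration protocol (illustrative names only)."""
    # Preoperative, off-line: derive the three data structures from the CT.
    mesh = extract_bone_surface_mesh(ct)          # e.g., Marching Cubes at an iso-value
    octree = build_bounding_sphere_octree(mesh)   # speeds up ray-to-contour queries
    grad = compute_volume_gradient(ct)            # Gaussian-derivative gradient field

    # Intraoperative: distortion-correct each image and calibrate each C-arm pose.
    views = [calibrate_view(shot, tracker, calib_grid) for shot in fluoro_shots]

    # Three-step registration; each step narrows in from its predecessor's output.
    pose = estimate_initial_pose(views)           # from landmarks or setup knowledge
    pose = coarse_icp_on_contours(mesh, octree, views, pose)
    pose = fine_gradient_projection(grad, views, pose)
    return pose
```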

For validation, the algorithm shows a projection of the volume gradients onto the fluoroscopic X-ray images. When there is close correspondence, the CT data set pose is very close to the actual patient pose.

The algorithm consists of three steps: 1) initial pose estimation; 2) coarse geometry-based registration on the bone contours; and 3) fine GPR on edge pixels. Each step has a funneling effect: it brings the data sets closer, has a narrower convergence range, uses more information, and is more accurate than its predecessor. The first two steps, which we describe first, are based on previous work.

The initial pose can be obtained in several ways, depending on the type of surgery and the data available: 1) from the clinical setup, which usually indicates the position of the patient on the operating table (e.g., supine, on the side) and the C-arm imaging views (e.g., anterior-posterior, lateral); 2) intraoperatively, by having the surgeon touch implanted fiducials or by acquiring landmark points on the anatomy surface with a tracked pointer; 3) by placing skin markers prior to the CT scan and having the surgeon touch them with a tracked pointer intraoperatively; and 4) intraoperatively, by having the user identify a few matching landmarks on the X-ray images, estimating their actual location as the intersection of rays, and performing weighted point-based registration as described in [25]. Options 1, 3, and 4 are appropriate for percutaneous procedures. Regardless of the method employed, the initial pose estimate is usually within tens of millimeters and 5-15° of the final pose.

Coarse registration further reduces the distance between the bone surface mesh and sampled points on the fluoroscopic X-ray bone contours with the ICP method [23], [24]. It yields the best transformation that can be obtained from the segmented images, which provide only an estimate of the real contour location, may be partially occluded, and may contain foreign objects. GPR further reduces the difference by incorporating contour pixel and volume gradient data. It eliminates foreign objects which appear in the X-ray images but not in the CT data. It is more efficient than intensity-based registration with predefined ROIs, although it has a narrower convergence range.

Coarse geometry-based registration computes a transformation that positions the bone surface mesh such that the rays emanating from the camera focal point and passing through the bone contours in the fluoroscopic X-ray images are tangent to it. It optimizes the distances between the rays and the apparent bone surface mesh contour. The bone contour is extracted from the fluoroscopic X-ray images using a livewire segmentation algorithm [41]. Then, contour points are sampled and matched to the corresponding bone surface mesh points with the 2-D/3-D ICP registration method [13]; a single iteration is sketched below.
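One iteration of the 2-D/3-D variant pairs each contour ray with its closest mesh point and then solves the same closed-form fit shown earlier on the pairings. The code is a minimal, assumption-laden sketch: it uses a brute-force closest-point search over sampled mesh points instead of the apparent-contour octree described next, and it reuses `rigid_fit` from the earlier sketch:

```python
import numpy as np

def closest_point_on_ray(origin, direction, points):
    """Closest point among `points` to the ray, and its foot point on the ray.

    `direction` must be unit-length; `points` is an (n, 3) array.
    """
    t = np.clip((points - origin) @ direction, 0.0, None)  # ray parameter of each projection
    feet = origin + t[:, None] * direction                 # per-point foot on the ray
    j = np.argmin(np.linalg.norm(points - feet, axis=1))
    return points[j], feet[j]

def icp_2d3d_step(rays, mesh_points, R, t):
    """One ICP iteration: pair contour rays with the posed mesh, then re-fit the pose."""
    posed = mesh_points @ R.T + t
    P, Q = [], []
    for origin, direction in rays:            # one ray per sampled contour pixel
        p, foot = closest_point_on_ray(origin, direction, posed)
        P.append(p)                           # current position of the paired mesh point
        Q.append(foot)                        # where the ray says it should be
    dR, dt = rigid_fit(np.asarray(P), np.asarray(Q))   # incremental motion
    return dR @ R, dR @ t + dt                # compose with the current pose
```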
The basic operation is to find, for each ray, the closest point on the bone's apparent contour. To speed up this search, we construct a hierarchical structure, called the bounding sphere octree, in which we place the bone surface mesh edges (Fig. 3). Each edge holds the normal information of its coincident faces. The tree is recursively constructed as follows: initially, the entire bone surface mesh is enclosed in the smallest bounding sphere [42], [43]. The sphere's bounding box is decomposed into eight cells and the edges are placed in the containing cells. Mesh edges which belong to several cells are split into segments, with each segment placed in its corresponding cell. The minimal bounding sphere for the edges in each cell is computed and is recursively subdivided until the sphere size is below a predefined threshold. The closest point to a ray is found by traversing the tree with a priority queue ordered by the ray-sphere distance, considering only the edges which are part of the apparent contour.

Fig. 3. Successive smallest bounding sphere approximations of a proximal femur surface mesh.

Fig. 4. The integral gradient projection property.

VI. VOLUME GRADIENT PROJECTIONS

We now present the gradient projection property, which is the basis of the GPR step. We define the following entities: $\Omega$, a region of space containing the imaged anatomy; $CT$, the CT image of the anatomy in $\Omega$; $XRAY$, the X-ray image of the anatomy in $\Omega$; $x$, a point in $\Omega$; $CT(x)$, the CT density value at $x$; $XRAY(p)$, the X-ray intensity value of the image point $p$ corresponding to $x$; $\nabla CT$, the 3-D gradient field of $CT$; and $\nabla XRAY$, the 2-D gradient field of $XRAY$.

We model the imaging process as follows (Fig. 4). The pinhole camera is defined by its focal length $f$, positioned at focal point $f_c$ and oriented in viewing direction $\hat{d}$.

The image plane of the camera is spanned by $\hat{u}$ and $\hat{v}$, the horizontal and vertical directions of the image plane, around the image principal point $o$ ($\hat{u} \perp \hat{v} \perp \hat{d}$). We denote by $p$ the projection of a point $x$ onto the image plane. The ray $r_p$ emanates from the camera focal point and passes through the point $p$; a point on this ray is given by the line equation $x(\lambda) = f_c + \lambda (p - f_c)$, with $\lambda \ge 0$. The distance between the point $x(\lambda)$ and the camera focal point is $\|x(\lambda) - f_c\| = \lambda \|p - f_c\|$.

Gradient Projection Theorem: The image gradient at a point $p$ in the image plane is linearly proportional to the integral of the weighted volume gradients of the points along the ray emanating from the camera focal point and passing through $p$:

$$\nabla XRAY(p) \;\propto\; \int_{x \in r_p} \frac{\|x - f_c\|}{f} \begin{bmatrix} \nabla CT(x) \cdot \hat{u} \\ \nabla CT(x) \cdot \hat{v} \end{bmatrix} dx \qquad (1)$$

where $XRAY(p)$ is the pixel intensity value of the image point $p$, and the image gradient is the vector of partial image derivatives in the horizontal and vertical directions, $\nabla XRAY(p) = (\partial XRAY/\partial u, \; \partial XRAY/\partial v)^T(p)$.

Proof: Following the physical model of X-ray propagation presented in [44], the ratio between the number of photons that enter and exit the imaged object for a given ray is

$$\frac{N_{out}}{N_{in}} = \exp\left(-\int_{r_p} \mu \, dl\right) \qquad (2)$$

where $N_{out}$ and $N_{in}$ are the numbers of exiting and entering photons, $\mu$ is the material attenuation coefficient, and $dl$ is an element of length along the ray. In our context, the number of exiting photons corresponds logarithmically to the pixel intensity value of the image point, the number of photons entering the imaged object corresponds to the initial intensity of the ray, which is constant and equal for all rays, and the attenuation coefficient per element of length corresponds to the intensity value of the CT voxels along the ray, $CT(x(\lambda))$. Parameterizing the ray by $\lambda$ and differentiating the line equation gives $dl = \|p - f_c\| \, d\lambda$. Substituting into (2), omitting the constants, and assuming a standard logarithmic sensor response, we obtain

$$XRAY(p) \;\propto\; -\int_0^\infty CT\big(f_c + \lambda (p - f_c)\big) \, \|p - f_c\| \, d\lambda .$$

The partial derivative of the X-ray image in direction $\hat{u}$ follows by differentiating under the integral: displacing the pixel $p$ by a small step along $\hat{u}$ displaces the ray point $x(\lambda)$ by $\lambda$ times that step, so each element of the integral contributes $\lambda \, \nabla CT(x(\lambda)) \cdot \hat{u}$. Substituting $\lambda = \|x - f_c\| / \|p - f_c\|$ and noting that $\|p - f_c\|$ is approximately the focal length $f$, we obtain the $\hat{u}$ component of (1); the partial derivative in direction $\hat{v}$ is obtained similarly, and combining the two expressions yields (1). The X-ray image gradient is thus equal to the integral over the weighted projections of the volume gradient onto the image plane, where the weight is the relative distance of the 3-D point from the focal point. Note that the weight increases as the 3-D point moves further from the focal point, because variations in the 2-D image are then the result of larger variations in the 3-D volume.

VII. FINE GRADIENT PROJECTION REGISTRATION (GPR)

We perform fine registration based on the gradient projection property. This step is based on the following observation: when the CT is aligned with the anatomy in the world, the rays emanating from the camera focal point that pass through contour pixels in the fluoroscopic X-ray images are tangent to the bone surface, as illustrated in Fig. 2(b). In this case, these rays pass through local magnitude maxima of the 3-D gradient vector field, since they are tangent to the surface. The desired transformation is thus the one that maximizes the sum of the 3-D gradient magnitudes incident on these rays.
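The 3-D gradient field that this step consumes can be precomputed once from the CT. A minimal sketch with SciPy follows; the kernel width is a placeholder, since the paper's exact settings (including the 0.5-mm up-sampling mentioned later) are described in the text and appendix rather than assumed here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def volume_gradient(ct, sigma_voxels=1.0):
    """Gaussian-derivative gradient field of a CT volume.

    ct: 3-D array of densities. Returns an array of shape ct.shape + (3,)
    holding the smoothed partial derivative along each axis.
    """
    parts = [gaussian_filter(ct, sigma=sigma_voxels,
                             order=[1 if k == a else 0 for k in range(3)])
             for a in range(3)]
    return np.stack(parts, axis=-1)
```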

Fig. 5. Experimental setup: in vitro (left) and cadaver study (right). The insert in the upper right corner shows the implanted aluminum sphere.

Formally, let $T$ be a 6-D pose transformation, and $\hat{u}$ and $\hat{v}$ the horizontal and vertical directions of the image plane (Fig. 4). Let $x$ be a point on a ray $r_{p_j}$ emanating from the camera focal point and passing through an edge pixel $p_j$ in the fluoroscopic X-ray image. The expected fluoroscopic X-ray gradient at edge pixel $p_j$ for a given pose $T$ is, according to (1),

$$\widehat{\nabla XRAY}(p_j, T) = \int_{x \in r_{p_j}} \begin{bmatrix} \nabla CT(Tx) \cdot \hat{u} \\ \nabla CT(Tx) \cdot \hat{v} \end{bmatrix} dx \qquad (3)$$

(we omit the distance weights of (1) to speed up the computation, since their influence on the optimization is minor). The goal is to find the transformation that maximizes the sum of the gradient projection magnitudes over all image edge pixels, that is,

$$T^* = \arg\max_T \sum_j \big\| \widehat{\nabla XRAY}(p_j, T) \big\| . \qquad (4)$$

The fine GPR step is as follows. Preoperatively, we compute the CT volume gradient by convolving the CT with a Gaussian derivative kernel, and up-sample it to a 0.5-mm resolution to obtain fast, high-quality nearest-neighbor ray sampling. Intraoperatively, we extract edge pixels from each fluoroscopic X-ray image with the Canny edge detector [45] and construct the set of rays emanating from the camera focal point and passing through these pixels. We then apply the Downhill Simplex method [46] to the function defined in (4), whose value is computed by sampling each ray at 1-mm intervals.

To achieve high accuracy, it is essential to filter out outlier edge pixels from the fluoroscopic X-ray images. Outlier edges are edges from foreign objects or from other anatomical structures. Outlier edges that are far from the anatomy of interest are filtered out automatically: the gradient projection value of their rays is small because there is no corresponding object in the CT. Outliers that are close to the anatomy of interest are eliminated by comparing the direction of the gradient projection with that of the actual image gradient; when the directions diverge, the magnitude of the gradient projection is set to zero.

GPR combines the advantages of both geometry- and intensity-based registration while overcoming their deficiencies. Like the geometry-based approach, it uses only edge pixels (from both the inner and outer bone contours); these pixels are only a small fraction of all pixels, so the computation time is significantly reduced. Unlike it, it does not rely on segmentation. Like the intensity-based approach, it selectively uses all the CT information, without relying on segmentation or on pairing between fluoroscopic X-ray pixels and CT voxels. Unlike it, it automatically defines focused ROIs, which speeds up the computation.
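The objective in (4) and its optimization can be sketched compactly. The fragment below is a schematic reconstruction, not the authors' code: `grad_at` is a hypothetical nearest-neighbor interpolator into the precomputed, up-sampled gradient field, and SciPy's Nelder-Mead plays the role of the Downhill Simplex method [46]:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def pose_to_rt(pose6):
    """6 parameters (3 Euler rotations in radians, 3 translations in mm) to (R, t)."""
    return Rotation.from_euler("xyz", pose6[:3]).as_matrix(), np.asarray(pose6[3:])

def gradient_projection_score(pose6, rays, grad_at, u_hat, v_hat, step_mm=1.0, n_steps=300):
    """Negated sum over edge-pixel rays of the projected-gradient magnitude (eq. (4)).

    rays:    list of (origin, unit_direction) pairs in world frame, one per edge pixel
    grad_at: hypothetical callable mapping (n, 3) CT-frame points to (n, 3) gradients
    """
    R, t = pose_to_rt(pose6)
    total = 0.0
    for origin, direction in rays:
        lam = np.arange(n_steps) * step_mm                   # sample the ray at 1-mm intervals
        pts = (origin + lam[:, None] * direction) @ R.T + t  # ray samples mapped into CT frame
        g = grad_at(pts) @ R                                 # gradients rotated back to world frame
        gp = np.stack([g @ u_hat, g @ v_hat], axis=-1).sum(axis=0)  # integrated 2-D projection
        total += np.linalg.norm(gp)                          # expected image-gradient magnitude
    return -total                                            # minimize the negative of (4)

# Downhill-simplex refinement from an initial pose estimate (inputs assumed prepared):
# result = minimize(gradient_projection_score, x0=pose_init,
#                   args=(rays, grad_at, u_hat, v_hat), method="Nelder-Mead")
```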
VIII. EXPERIMENTAL RESULTS

We have implemented the gradient-based algorithm and validated it with the proposed protocol in three types of situations: 1) simulation experiments with clinical CT data and simulated fluoroscopic X-rays; 2) in vitro experiments with dry bones; and 3) two cadaver experiments. The simulation experiments establish a lower bound on the error for ideal fluoroscopic X-ray imaging and tracking conditions and show how the algorithm copes with soft tissue and partial occlusions. The in vitro experiments establish a lower bound on the error for real CT and fluoroscopic X-ray images under ideal conditions. The cadaver experiments emulate the surgical situation and establish the expected error for intraoperative navigation with CT images. To demonstrate the generality of our method, we applied it to four different structures: human femur, spine, pelvis, and lamb hip.

We used a CT scanner, a 9-in BV29 C-arm (Philips, The Netherlands), a Polaris optical tracking camera (Northern Digital Inc., Waterloo, ON, Canada), a FluoroTrax C-arm calibration ring and active optical trackers (Traxtal, Toronto, ON, Canada), and a Matrox Meteor II digital frame grabber. Processing was done on a 2.4-GHz, 1-GB RAM PC running Windows XP. Fig. 5 shows the experimental setup.

A. Registration Error Measurement and Validation

To quantify the registration error, we use the target registration error (TRE) as defined in [31]: the distance between the actual and the computed positions of selected target features, which can be landmark points or the bone surface itself. The difficulty in estimating the TRE lies in determining the actual position of the targets, which is itself prone to measurement errors. The most accurate, but expensive and cumbersome, method is to use a custom mechanical device which allows controlled, precise positioning of the anatomy.

The next best option is to use implanted fiducials (spheres) and perform point-to-point registration between their centers in the CT images and their actual centers as measured by direct contact with a tracked pointer. This establishes a ground-truth transformation to which the computed transformation can then be compared. We opted, like many others, for this option in our experiments. The fiducial TRE, $TRE_{fid}$, is defined as the distance between the actual and the computed positions of the fiducials [31]:

$$TRE_{fid}(p) = \| T(p_{CT}) - p_{tracker} \|$$

where $p$ is a fiducial point in space, $p_{CT}$ are its coordinates in the CT image coordinate frame, $p_{tracker}$ are its coordinates as measured by the tracking system, and $T$ is the computed transformation. The accuracy of this measurement depends on the accuracies of the tracking device, of the fiducial center localization in the CT, and of the computed transformation. Its advantages are that it can be computed with one or more fiducials, with no restrictions on their relative positions, and that it does not require the computation of a ground-truth transformation. Its disadvantages are that it requires fiducials, that it depends on very few points and their spatial distribution, and that it is only an indirect estimate of the error that the surgeon will observe when using the CT images for intraoperative navigation.

We propose an alternative measure, the surface TRE, $TRE_{surf}$, which we define as the distance between the actual and the computed positions of points on the bone surface identified in the CT image:

$$TRE_{surf}(p) = \| T(p_{CT}) - T_{gold}(p_{CT}) \|$$

where $T_{gold}$ is the ground-truth (gold) rigid transformation computed by fiducial contact-based registration. Although the $TRE_{surf}$ is relative to the ground-truth registration, it does not require additional implanted fiducials, relying instead on many points uniformly distributed over the entire anatomy surface which do not require individual measurement. We quantify the expected error of the ground-truth registration with the method described by Fitzpatrick et al. [31] for all points on the bone surface. Although in the worst case this error should be added to the results, it will be smaller in most cases, since it also depends on the same optical tracker inaccuracies. The $TRE_{surf}$ includes all the errors in the registration chain, thus providing a faithful estimate of tool positioning errors during intraoperative navigation based on CT images.

In our experiments, the ground-truth bone positions for the in vitro and cadaver experiments are obtained from the implanted fiducials by contact-based registration; the ground-truth transformation for the simulation experiments is known in advance. Note that the $TRE_{surf}$ can also be used to quantify how far the initial guess is from the final pose, by substituting the initial guess transformation for the computed transformation.
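Both error measures reduce to a few lines of array code. A minimal sketch, assuming 4x4 homogeneous transforms and point sets as NumPy arrays, with names invented for illustration:

```python
import numpy as np

def apply(T, pts):
    """Apply a 4x4 rigid transform to an (n, 3) array of points."""
    return pts @ T[:3, :3].T + T[:3, 3]

def tre_fid(T, fid_ct, fid_tracker):
    """Fiducial TRE: distances between transformed CT fiducials and their tracked positions."""
    return np.linalg.norm(apply(T, fid_ct) - fid_tracker, axis=1)

def tre_surf(T, T_gold, surf_ct):
    """Surface TRE: distances between the computed and gold poses of CT surface points."""
    return np.linalg.norm(apply(T, surf_ct) - apply(T_gold, surf_ct), axis=1)

# e.g., report the mean and maximum surface TRE:
# errs = tre_surf(T_computed, T_gold, surface_points); print(errs.mean(), errs.max())
```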
For validation, we show the bone model position with respect to the actual bone position by overlaying the gradient projection magnitudes directly onto the fluoroscopic X-ray images [Fig. 6(b) and (c)]. This shows how far the bone model is from where it should be. The overlaid edges are computed as follows. For each pixel in the fluoroscopic X-ray image, we create a ray starting at the camera focal point and passing through the pixel. For each such ray, we compute its gradient projection using the CT volume gradient, which gives the expected 2-D gradient direction and magnitude for the pixel. Nonmaxima suppression and thresholding on these pixels leave the expected edge pixels, which are overlaid on the original fluoroscopic X-ray image. Note that this approach yields more accurate results than generating DRRs from the CT at the computed position and then extracting bone contours from them, since edges in a DRR are usually more blurred than in actual fluoroscopic images [47].

B. Experiments

We performed a simulation experiment on a real clinical pelvis CT. We first generated DRRs at known poses and input these as fluoroscopic X-ray images, together with an initial guess transformation with a realistic initial surface TRE of 9.5 mm, to the algorithm. We then computed the final error as described above. The first entries in Tables I and II show the results averaged over ten runs. The mean measured error is 0.5 mm, about the size of an X-ray image pixel.

Since the reported surface TRE for all other cases depends on the accuracy of the ground-truth registration, we performed fiducial point-based registration and applied the method described by Fitzpatrick et al. [31] to compute the expected error over the entire imaged anatomy. We obtained an expected average accuracy of a fraction of a millimeter (0.4- to 0.7-mm maximum) for all the points on the bone surface.

We performed in vitro experiments on a single vertebra of a dry spine and on a dry proximal femur. First, we implanted seven 6-mm aluminum spheres (Fig. 5, top right insert) and CT scanned the bones at a 0.6-mm slice interval. We extracted from each data set the sphere centers at a resolution of 0.1 mm. In the operating room, we acquired two sets of three fluoroscopic X-ray images at various C-arm orientations, one with and one without the anatomy, for optimal camera calibration. The fluoroscopic X-ray images were 8-bit gray-scale with a pixel size of 0.45 mm. We performed C-arm calibration to a mean accuracy of 0.3 mm (0.6-mm maximum), as described in our previous work [40]. We performed fiducial contact-based registration on the spheres and established the ground-truth registration. We then performed image-based registration with the gradient-based algorithm and compared the resulting transformations. The second and third entries in Tables I and II show the results. The 20-s to 50-s computation time for the ideal case increased in the worst case, when foreign objects and surrounding anatomy appeared in the fluoroscopic X-ray images; the accuracy, however, remained acceptable. We observed a small decrease in error when using three fluoroscopic X-ray images instead of two, and no further significant decrease beyond three. We observed little or no influence when foreign objects were present in the fluoroscopic X-ray images.

We performed cadaver experiments on a fresh lamb hip and a human pelvis following the same protocol as in the in vitro experiments, except that we implanted four spheres instead of seven. The last entries in Tables I and II show the results. For the lamb hip, the decrease in accuracy as compared to the in vitro case is most likely due to a less accurate ground-truth registration and the fact that the lamb femur has fewer salient features than the other anatomical structures. For the human pelvis, the decrease in accuracy is due to the larger size of the pelvic bone as compared to the other structures.

Fig. 6. Registration experiments: real pelvis with simulated fluoroscopic X-ray images (DRRs), in vitro dry vertebra and dry femur with surgical instruments, and cadaver lamb hip. The first column shows the CT model. The second and third columns show one fluoroscopic X-ray image with contours at (b) the initial and (c) the final pose superimposed on them (white lines).

TABLE I. Summary of experimental results. Each scenario (ideal, realistic, and bad) is defined by the CT slice spacing Δh (mm) and the presence of foreign objects in the fluoroscopic X-ray images (none, some). Each entry shows the mean (maximum) surface TRE in millimeters; computation times are in seconds. (Numeric entries lost in transcription.)

TABLE II. Detailed experimental results for the realistic cases (CT slice spacing of 2.4 mm, some foreign objects in the fluoroscopic X-ray images), with the number of images as shown in Table I. For each data set, three rows of results are shown: one after the initial guess registration step (first row), one after the coarse geometry-based registration step (second row), and one after the fine gradient-projection registration step (third row). Each row shows the error at the end of the step averaged over ten runs, the individual position and orientation parameters, and the running time in seconds. The real pelvis, dry femur, and lamb hip have a success rate of 100%; the dry vertebra and human pelvis have a success rate of 70%. Note that a small angular deviation from the correct transformation can yield a large surface TRE, depending on the location of the origin; the origins of the dynamic reference frames are located at a distance from the surface of the anatomy. (Numeric entries lost in transcription.)

To better understand and quantify the various aspects of the proposed method, we conducted an extensive series of experiments on the dry femur data set. To determine the influence of the initial pose guess on the coarse and fine registration steps, we applied the gradient-based algorithm to many initial pose guesses between 1 and 85 mm from the ground-truth registration. For each 1-mm interval, we randomly generated 50 initial positions and performed the registration. To isolate the registration error from the ground-truth error, we also computed the fiducial TRE for 5-7 representative implanted fiducials. Fig. 7 summarizes the results of the 4250 runs. We conclude that the mean registration accuracy is nearly independent of the initial guess, but that the percentage of failures (registrations with surface TRE > 2 mm) increases as the initial guess is further away. A mean failure rate of 5% occurs at 72 mm (16 mm for the maximum).

To better understand the characteristics of the fine gradient-based projection registration search space, we recorded the value of the optimization function as the search converges toward the final transformation. Fig. 8 shows the results for the human pelvis cadaver study (the other cases are very similar). The plots show a unique minimum near the ground-truth value, an appropriate convergence range, and relatively smooth, monotonically decreasing values. This validates our choice of optimization function and search method.
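A study of this kind is easy to script around any registration routine. The sketch below is an illustrative harness only, with `register`, `random_pose_at_distance`, and `tre_surf_of` as hypothetical stand-ins; it reproduces the sweep structure of 50 random initial guesses per 1-mm bin, recording the failure rate at the 2-mm threshold:

```python
import numpy as np

def robustness_sweep(register, random_pose_at_distance, tre_surf_of,
                     max_mm=85, runs_per_bin=50):
    """Failure rate (surface TRE > 2 mm) versus initial-guess distance.

    register:                hypothetical function mapping an initial pose to a final pose
    random_pose_at_distance: hypothetical sampler of poses at a given initial surface TRE (mm)
    tre_surf_of:             hypothetical function returning a pose's surface TRE (mm)
    """
    rates = []
    for d in range(1, max_mm + 1):
        errs = [tre_surf_of(register(random_pose_at_distance(d)))
                for _ in range(runs_per_bin)]
        rates.append(np.mean(np.asarray(errs) > 2.0))  # fraction of failed registrations
    return np.asarray(rates)                           # one failure rate per 1-mm bin
```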
IX. CONCLUSION

We have presented the GPR algorithm, a new method for rigid registration of a patient's preoperative CT to its intraoperative situation with a few fluoroscopic X-ray images obtained with a tracked C-arm. The three-step hybrid method progressively brings the data sets closer with landmark point-based registration, coarse geometry-based registration on the bone contours, and fine GPR on edge pixels. Each step uses more information than its predecessor, has a narrower convergence range, and is slower, but adds accuracy. The last step, GPR, which exploits the volume gradient projection property, achieves good accuracy even in the presence of other anatomical structures and foreign objects, such as implants and surgical tools, in the fluoroscopic X-ray images. It does not rely on the accuracy of segmentation, as geometry-based approaches do, and is more efficient than intensity-based registration, although it has a narrow convergence range.

We conclude from our experimental results that the desired goal, a 1- to 1.5-mm mean target registration error (2- to 3-mm maximum) obtained within 60 s at least 95% of the time, with simple and minimal user interaction, including validation, with standard imaging and tracking equipment in clinical conditions, is within reach. In the cadaver studies we achieved, for the lamb hip, 1.4 mm (2.5-mm maximum) 100% of the time, and, for the human pelvis, 1.7 mm (2.6-mm maximum) 70% of the time, with some user interaction (CT processing, initial pose estimation, and livewire bone contour segmentation of the fluoroscopic X-ray images). For validation, we show how far the bone model is from where it should be by overlaying the bone edge contours directly onto the fluoroscopic X-ray images.

We plan to further reduce the GPR algorithm's computation time with space-leaping techniques adapted from volume rendering. We also plan to improve the robustness of the GPR algorithm by implementing genetic or simulated annealing techniques to help avoid local minima.

Fig. 7. Detailed results of the in vitro dry femur experiments. The horizontal axis indicates the initial surface TRE from the ground-truth transformation. The vertical axes indicate (a) the final fiducial TRE; (b) the final surface TRE; (c) the cumulative percentage of TRE failures (surface TRE > 2 mm); and (d) the average running time.

Fig. 8. Optimization function values (vertical axis) for individual pose parameters as a function of their deviation from the ground-truth value (zero for all parameters). Translational deviations are in millimeters, rotational deviations in degrees. The lower dark curves correspond to the values for ideal conditions, the upper light curves to the values for realistic conditions.

We plan to embed the algorithm in navigation and positioning systems for minimally invasive and percutaneous orthopaedic procedures, including the FRACAS system for long bone fracture reduction [3], the MARS robot for percutaneous spinal pedicle screw insertion [48], and a new system for percutaneous pelvic fracture reduction.

APPENDIX
MAIN PARAMETERS AND THEIR SETTINGS

The proposed algorithm, like any of its kind, relies on tens of parameters with preset values. Table III lists the most important ones and their values. The parameters are classified into six categories, with the first three used in coarse geometry-based registration (Step 2) and the last three used in fine gradient-based projection registration (Step 3); no parameters are used for the initial registration. The 3-D model parameters include the iso-value threshold used by the Marching Cubes algorithm to segment the bone surface and the maximum depth of the hierarchical sphere tree described at the end of Section V. The Livewire parameters include the number of points input by the user on each fluoroscopic X-ray image and the number of points sampled on the segmented bone contours. The ICP parameter is the maximum number of iterations. For processing the fluoroscopic X-ray images we use the Canny edge detector; as input to the edge detector we specify a Gaussian mask used for computing the image gradient and two thresholds defined relative to the gradient magnitude data, and as a post-processing step we discard edges shorter than 30 pixels. For computing the gradient of the CT image, we use a Gaussian filter and sample the image with rays at 1-mm intervals. Finally, in the gradient-based registration step we use the Downhill Simplex optimization algorithm.

TABLE III. Main parameters and their settings used in the experiments. (Numeric entries lost in transcription.)
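Collected in one place, the parameters described above might look as follows. Every numeric value not stated in the prose is a placeholder (the settings of Table III did not survive transcription), so this is an assumed configuration skeleton rather than the authors' actual settings:

```python
from dataclasses import dataclass

@dataclass
class GprConfig:
    """Parameter skeleton for the three-step registration (placeholder values)."""
    # 3-D model (coarse step)
    iso_value: float = 0.0             # Marching Cubes bone iso-value threshold (placeholder)
    max_octree_depth: int = 0          # maximum depth of the bounding sphere octree (placeholder)
    # Livewire segmentation (coarse step)
    livewire_points_per_image: int = 0   # points input by the user per image (placeholder)
    contour_sample_points: int = 0       # points sampled on each segmented contour (placeholder)
    # ICP (coarse step)
    icp_max_iterations: int = 0        # maximum number of ICP iterations (placeholder)
    # Canny edge detection (fine step)
    canny_gaussian_sigma: float = 0.0  # Gaussian mask for the image gradient (placeholder)
    canny_low_rel: float = 0.0         # thresholds relative to gradient magnitude (placeholders)
    canny_high_rel: float = 0.0
    min_edge_length_px: int = 30       # edges shorter than 30 pixels are discarded (from the text)
    # CT gradient and ray sampling (fine step)
    gradient_gaussian_sigma: float = 0.0  # Gaussian filter for the CT gradient (placeholder)
    ray_step_mm: float = 1.0              # rays sampled at 1-mm intervals (from the text)
```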
Diaz, The application accuracy of the neuromate robot a quantitative comparison with frameless and frame-based surgical localization systems, Comput. Aided Surg., vol. 7, no. 2, pp , [7] J. R. Adler Jr, M. J. Murphy, S. D. Chang, and S. L. Hancock, Image-guided robotic radiosurgery, Neurosurgery, vol. 44, no. 6, pp , [8] L. Lemieux et al., A patient-to-computed-tomography image registration method based on digitally reconstructed radiographs, Med. Phys., vol. 21, no. 11, [9] A. Hamadeh, P. Sautot, S. Lavallée, and P. Cinquin, Towards automatic registration between CT and X-ray images: Cooperation between 3D/2D registration and 2D edge detection, in Proc. Medical Robotics and Computer Assisted Surgery Conf., 1995, pp [10] M. J. Murphy, An automatic six-degree-of-freedom registration algorithm for image-guided frameless stereotaxic radiosurgery, Med. Phys., vol. 24, no. 6, [11] A. Hamadeh, S. Lavallée, and P. Cinquin, Automated 3-Dimensional computed tomographic and fluoroscopic image registration, Comput. Aided Surg., vol. 3, no. 1, [12] M. Roth, C. Brack, R. Burgkart, and A. Czopf, Multi-view contourless registration of bone structures using a single calibrated X-ray fluoroscope, in Proc. Computer-Assisted Radiology and Surgery Conf., 1999, pp [13] A. Guéziec et al., Providing visual information to validate 2D to 3D registration, Med. Image Anal., vol. 4, no. 4, [14] G. P. Penney, P. G. Batchelor, D. L. G. Hill, D. J. Hawkes, and J. Weese, Validation of a 2D to 3D registration algorithm for aligning preoperative CT images and intraoperative fluoroscopy images, Med. Phys., vol. 28, no. 6, [15] D. A. LaRose, Iterative X-ray/CT Registration Using Accelerated Volume Rendering, Ph.D. dissertation, Robotics Inst., Carnegie Mellon Univ., Pittsburgh, PA, [16] L. Zöllei, W. E. L. Grimson, A. Norbash, and W. M. Wells III, 2-D-3-D rigid registration of X-ray fluoroscopy and CT images using mutual information and sparsely sampled histogram estimators, in Proc. IEEE Computer Vision and Pattern Recognition Conf., 2001, pp [17] D. Sarrut and S. Clippe, Geometrical transformation approximation for 2D/3D intensity-based registration of portal images and CT scan, in Proc. Medical Image Computing and Computer Assisted Intervention Conf., 2001, pp

[18] T. Rohlfing and C. R. Maurer, Jr., A novel image similarity measure for registration of 3-D MR images and X-ray projection images, in Proc. Medical Image Computing and Computer Assisted Intervention Conf., 2002.
[19] B. Brendel, S. Winter, A. Rick, M. Stockheim, and H. Ermert, Registration of 3D CT and ultrasound datasets of the spine using bone structures, Comput. Aided Surg., vol. 7, no. 3.
[20] D. Tomaževič, B. Likar, and F. Pernuš, Rigid 2D/3D registration of intraoperative digital X-ray images and preoperative CT and MR images, in Proc. SPIE Medical Imaging Conf., 2002.
[21] H. Livyatan, Calibration and Gradient-Based Rigid Registration of Fluoroscopic X-ray to CT, for Intra-Operative Navigation, master's thesis, Sch. Eng. Comput. Sci., The Hebrew Univ. of Jerusalem, Jerusalem, Israel.
[22] J. B. A. Maintz and M. A. Viergever, A survey of medical image registration, Med. Image Anal., vol. 2, no. 1, pp. 1-37.
[23] P. J. Besl and N. D. McKay, A method for registration of 3-D shapes, IEEE Trans. Pattern Anal. Machine Intell., vol. 14, Feb. 1992.
[24] Z. Zhang, Iterative point matching for registration of free-form curves and surfaces, Int. J. Comput. Vis., vol. 13, no. 2, 1994.
[25] C. R. Maurer, Jr., G. B. Aboutanos, B. M. Dawant, R. J. Maciunas, and J. M. Fitzpatrick, Registration of 3-D images using weighted geometrical features, IEEE Trans. Med. Imag., vol. 15, Dec. 1996.
[26] A. Guéziec, P. Kazanzides, B. Williamson, and R. H. Taylor, Anatomy-based registration of CT-scan and intraoperative X-ray images for guiding a surgical robot, IEEE Trans. Med. Imag., vol. 17, Oct. 1998.
[27] S. Lavallée, R. Szeliski, and L. Brunie, Anatomy-based registration of three-dimensional medical images, X-ray projections, and three-dimensional models using octree-splines, in Computer-Integrated Surgery, Technology and Clinical Applications, R. H. Taylor, S. Lavallée, G. C. Burdea, and R. Mösges, Eds. Cambridge, MA: MIT Press, 1995, ch. 7.
[28] B. Ma, R. E. Ellis, and D. J. Fleet, Spotlights: A robust method for surface-based registration in orthopedic surgery, in Proc. Medical Image Computing and Computer Assisted Intervention Conf., 1999.
[29] O. Sadowsky, Z. Yaniv, and L. Joskowicz, Comparative in vitro study of contact and image-based rigid registration for computer-aided surgery, Comput. Aided Surg., vol. 7, no. 4.
[30] R. Bächler, H. Bunke, and L. P. Nolte, Restricted surface matching: Numerical optimization and technical evaluation, Comput. Aided Surg., vol. 6, no. 3.
[31] J. M. Fitzpatrick, J. B. West, and C. R. Maurer, Jr., Predicting error in rigid-body, point-based registration, IEEE Trans. Med. Imag., vol. 17, Oct. 1998.
[32] B. K. P. Horn, Closed-form solution of absolute orientation using unit quaternions, J. Opt. Soc. Amer. A, vol. 4, no. 4, pp. 629-642, Apr. 1987.
[33] W. M. Wells, III, P. Viola, and R. Kikinis, Multi-modal volume registration by maximization of mutual information, in Proc. Medical Robotics and Computer Assisted Surgery Conf., 1995.
[34] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens, Multi-modality image registration by maximization of mutual information, in Proc. Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA '96), 1996.
[35] J. P. W. Pluim, J. B. A. Maintz, and M. A. Viergever, Image registration by maximization of combined mutual information and gradient information, IEEE Trans. Med. Imag., vol. 19, Aug. 2000.
[36] D. A. LaRose, J. Bayouth, and T. Kanade, Transgraph: Interactive intensity-based 2D/3D registration of X-ray and CT data, in Proc. SPIE Medical Imaging: Image Processing.
[37] G. P. Penney, J. Weese, J. A. Little, P. Desmedt, D. L. G. Hill, and D. J. Hawkes, A comparison of similarity measures for use in 2-D-3-D medical image registration, IEEE Trans. Med. Imag., vol. 17, Aug. 1998.
[38] D. Tomaževič, B. Likar, and F. Pernuš, Gradient-based registration of 3D MR and 2D X-ray images, in Proc. Computer Assisted Radiology and Surgery Conf.
[39] K. G. A. Gilhuijs, P. J. H. van de Ven, and M. van Herk, Automatic three-dimensional inspection of patient setup in radiation therapy using portal images, simulator images, and computed tomography data, Med. Phys., vol. 23.
[40] H. Livyatan, Z. Yaniv, and L. Joskowicz, Robust automatic C-arm calibration for fluoroscopy-based navigation: A practical approach, in Proc. Medical Image Computing and Computer Assisted Intervention Conf., T. Dohi and R. Kikinis, Eds., 2002.
[41] E. N. Mortensen and W. A. Barrett, Interactive segmentation with intelligent scissors, Graph. Models Image Processing, vol. 60, no. 5.
[42] B. Gärtner, Fast and robust smallest enclosing balls, in Proc. Eur. Symp. Algorithms (ESA), 1999.
[43] B. Gärtner. (2002) Smallest Enclosing Balls of Points: Fast and Robust in C++. [Online].
[44] A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging. Piscataway, NJ: IEEE Press, 1988.
[45] R. Jain, R. Kasturi, and B. G. Schunk, Machine Vision. New York: McGraw-Hill, 1995.
[46] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C, 2nd ed. Cambridge, U.K.: Cambridge Univ. Press, 1992.
[47] W. Cai, Transfer functions in DRR volume rendering, in Proc. Computer-Assisted Radiology and Surgery Conf., H. Lemke et al., Eds., 1999.
[48] L. Joskowicz, C. Milgrom, M. Shoham, Z. Yaniv, and A. Simkin, Robot-guided long bone intramedullary distal locking: Concept and preliminary results, in Proc. 3rd Int. Symp. Robotics and Automation, Toluca, Mexico, 2002.

Chapter 3

Intra-operative panoramic image from fluoroscopic X-ray images

1. Long Bone Panoramas from Fluoroscopic X-ray Images, Z. Yaniv, L. Joskowicz, IEEE Trans. on Medical Imaging, Vol. 23(1), pp. 26-35, 2004.

Long Bone Panoramas From Fluoroscopic X-Ray Images

Ziv Yaniv*, Student Member, IEEE, and Leo Joskowicz, Senior Member, IEEE

Abstract: This paper presents a new method for creating a single panoramic image of a long bone from several individual fluoroscopic X-ray images. Panoramic images are useful preoperatively for diagnosis, and intraoperatively for long bone fragment alignment, for making anatomical measurements, and for documenting surgical outcomes. Our method composes individual overlapping images into an undistorted panoramic view that is the equivalent of a single X-ray image with a wide field of view. The correlations between the images are established from the graduations of a radiolucent ruler imaged alongside the long bone. Unlike existing methods, ours uses readily available hardware, requires a simple image acquisition protocol with minimal user input, and works with existing fluoroscopic C-arm units without modifications. It is robust and accurate, producing panoramas whose quality and spatial resolution are comparable to those of the individual images. The method has been successfully tested on in vitro and clinical cases.

Index Terms: Fluoroscopic X-ray, image registration, panoramic images.

Manuscript received February 27, 2003; revised June 18. This work was supported in part by a grant from the Israel Ministry of Industry and Trade through the IZMEL Consortium on Image-Guided Therapy. The Associate Editor responsible for coordinating the review of this paper and recommending its publication was W. Niessen. Asterisk indicates corresponding author. *Z. Yaniv is with the School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel (e-mail: zivy@cs.huji.ac.il). L. Joskowicz is with the School of Computer Science and Engineering, The Hebrew University of Jerusalem, Jerusalem, Israel (e-mail: josko@cs.huji.ac.il).

CURRENT orthopedic practice relies heavily on fluoroscopic X-ray images to perform a variety of surgical procedures, such as fracture reduction, joint replacement, osteotomies, and pedicle screw insertion, to name a few. Surgeons use fluoroscopic X-ray images acquired during surgery with a mobile fluoroscopic C-arm to determine the relative position and orientation of bones, implants, and surgical instruments. While inexpensive and readily available, X-ray fluoroscopy has several important limitations, including a narrow field of view, limited resolution and contrast, and geometric distortion. These limitations require the surgeon to frequently acquire images of the surgical situation and to mentally correlate them. They preclude precise evaluation and measurements across images and complicate the placement of long implants. This leads to positioning errors, cumulative radiation exposure to the surgeon, and suboptimal results in a small but nonnegligible number of cases.

While modern X-ray units incorporate geometric distortion correction and contrast enhancement, only a handful address the narrow field of view issue, and only a system by Siemens is designed for intraoperative use [1]-[3]. In this paper, we describe a novel, simple, and inexpensive method for creating a single panoramic image from a few individual fluoroscopic X-ray images.
Our method produces an undistorted panoramic view, the equivalent of a single X-ray image with a field of view several times wider than that of the individual images, by finding correlations between individual overlapping images and composing them (Fig. 1). It uses standard, readily available, and inexpensive hardware: a video frame grabber, a standard PC, a dewarp grid, and a radiolucent X-ray ruler with graduations commonly used in orthopaedics. Our method can be used with existing fluoroscopic C-arm units without any modification, involves a simple imaging protocol, and requires minimal user input. Unlike existing methods, ours produces panoramic images which can be used during surgery.

Distortionless fluoroscopic X-ray panoramic images can be very useful in many preoperative and intraoperative situations. Preoperatively, they can be used for diagnosis and measurements, replacing expensive special-purpose film-based X-ray machines with custom cassettes. They are readily available and require less radiation than conventional film X-ray images. Intraoperatively, they can be used in long bone surgery to determine the mechanical axis of the bone, to align bone fragments, and to measure extremity length and anteversion. In joint replacement surgery, they can be used to assess the position of long implants, such as hip implants, and to determine the length of intramedullary nails. All these require the presence, in the same image, of relevant anatomical features, such as the condyles, the femur head, and the femur neck. Panoramas are also useful to document surgery outcomes. These measurements and images are difficult or impossible to obtain with existing methods and can help to improve diagnosis, shorten surgery time, and improve outcomes. A detailed analysis of orthopaedic applications shows that millimeter-level accuracy over 500 mm is sufficient in most cases.

I. PREVIOUS WORK

The creation of image panoramas, also called image mosaicing, has a long history and is an active area of research in computer graphics and computer vision [4]-[8]. Panoramic images are created by correcting individual images for distortion (when necessary), aligning them, and composing them. The most challenging step is image alignment. The main technical issues are the number of images and the amount of overlap between successive images, the geometric constraints on camera poses, the type of mapping between images, and the identification of common image features. Most existing methods assume many closely related images, usually obtained

from a video movie in which consecutive images are nearly identical. However, for medical images, identifying common features in two consecutive images is difficult, as anatomical features are hard to locate reliably and accurately.

Fig. 1. Panorama construction of a humerus from six fluoroscopic X-ray images. The top row shows the original images with the orthopedic ruler on the top. The bottom image shows the resulting panorama.

Two systems for creating panoramas from X-ray images have been developed. The first uses a special-purpose digital X-ray machine that acquires partially overlapping images by simultaneously translating the X-ray source and the image intensifier over the patient [9], [10]. The images are processed by an EasyVision workstation, which merges them into a single panoramic image. The system was developed to measure spine scoliosis [11] and is also useful in imaging the colon and the limbs. A quantitative study of X-ray dose and image parallax is described in [10]. The advantages of this method are that it produces high-quality undistorted panoramas with little parallax, that it has local exposure correction, and that it is suitable for any part of the body. However, it requires costly special hardware in its own suite and cannot be used intraoperatively.

In the second system, images are acquired using overlapping standard film cassettes [12], which are digitized and then input to a computer that composes them. The setup requires placing a radiolucent plaque with a metal grid pattern parallel and close to the imaged anatomy, on the X-ray path. The grid placement requirements are necessary to minimize the difference between the scale factors of the imaged grid and the patient anatomy. The advantages of this method are that it uses only a few images and that it can image any part of the body. Its disadvantages are that it requires film digitization, which is time-consuming and intraoperatively impractical, and that it yields erroneous measurements when the metal grid is not close to the imaged anatomy.

Siemens AG (Munich, Germany) has developed the only system that produces panoramic images from fluoroscopic X-ray images [1]-[3]. It consists of a motorized C-arm which acquires overlapping images by precise simultaneous translation of the X-ray source and image intensifier. The images are aligned by semi-automatically detecting and matching semantically meaningful features on a reconstruction plane. Composed images of objects that are not on that plane will have parallax errors. The advantages of this method are that it requires only a few images, that it can image any part of the body, and that it does not require a phantom. Its disadvantages are that it only works with motorized C-arms, which are expensive and not very common. Also, no direct metric measurements are possible on the resulting panorama, since no distance on the reconstruction plane is known a priori.

Panoramic views are widely used in other areas of medicine, such as in dentistry. They are usually tailored to the anatomy and obtained with special-purpose cameras, hardware, and films. For example, two recent works describe how to generate digitally reconstructed panoramas of nailfold capillary patterns from video sequences [13] and from ultrasound image sequences [14]. Both methods assume that a sequence of many, largely overlapping images taken in a single plane is available.
These methods are not applicable to the current practice of X-ray fluoroscopy, where the continuous acquisition mode is undesirable and seldom used.

Geometric distortion correction of individual fluoroscopic X-ray images is a necessary first step before composing them. Recent studies indicate distortion of up to 4 mm at the image edges in older fluoroscopic C-arm units [15]. Fluoroscopic X-ray distortion correction is well understood and has been addressed in previous research, including our own [15]-[17].

Image alignment, also called image registration, consists of computing the transformation that aligns paired points in two data sets. Previous work shows how to establish and reduce the

dissimilarity between the images [18], which can be based on geometric features or on pixel intensity values. Feature-based alignment assumes feature segmentation but requires less overlap between images [7]. Intensity-based matching does not require segmentation but only works for nearly identical images [19]. Neither feature-based nor intensity-based alignment is directly applicable to our problem. Accurate segmentation of anatomical features in fluoroscopic X-ray images is not, in general, sufficiently reliable [15], [20]. For anatomical structures such as long bones, there are not enough distinct features to perform the match. Intensity-based methods are impractical, since several dozen images would be required to achieve significant overlap between them.

II. EQUIPMENT AND IMAGE ACQUISITION PROTOCOL

We have developed a new method for creating a single panoramic image of a long bone from individual fluoroscopic X-ray images. In this section, we define an image acquisition protocol and describe customized techniques from image processing and computer vision to match overlapping images, determine their relative position, and compose them into a single panoramic view. The relative simplicity and robustness of our method derives from the combination of the right simplifying assumptions for image acquisition, the use of external markers to establish image correspondence, and the judicious adaptation of image processing algorithms. The protocol is designed to minimize the patient's radiation exposure, eliminates the radiation exposure to the surgeon, and can be performed by any X-ray technician with no additional training. It can be repeated at various stages during the surgery.

We first describe the equipment setup and image capture protocol. The equipment consists of a mobile fluoroscopic C-arm unit commonly used in the operating room, a standard PC with a video card and a monitor, a custom dewarp grid, and an off-the-shelf orthopaedic radiolucent X-ray ruler with graduations. Images are directly downloaded from the fluoroscopic C-arm unit to the computer via a digital port, or are captured from the video output port with an analog-to-digital frame grabber. The images are stored and processed in the computer, and the resulting panoramic view is displayed on the fluoroscopic unit screen or on the computer screen.

The custom dewarp grid is used to correct the images for geometric distortion [Fig. 2(a)]. It is a 7-mm-thick coated aluminum alloy plate with 405 holes of 4-mm diameter, uniformly distributed at 10-mm intervals and machined to sub-millimeter tolerance. It attaches to the C-arm image intensifier on existing screw holes. This grid is simpler and cheaper to make than the commonly used steel balls or cross-hairs mounted on a radiolucent plate, and yields excellent results. The radiolucent X-ray ruler [Fig. 2(b)] is 1 meter long, has graduations at 5-mm intervals, and is sterilizable.

The image acquisition protocol is as follows. Shortly before surgery, the C-arm is oriented at the pose which will be used in acquiring the panoramic images (usually anterior-posterior or lateral). The dewarp grid is attached to the image intensifier and an image of it is acquired and transferred into the computer, which computes the distortion correction map. The dewarp grid
is then detached from the C-arm and the surgery proceeds as usual. When the panoramic view is required during the surgery, the ruler is placed next to the patient, roughly parallel to the imaged long bone (Fig. 3). The camera, ruler, and anatomy of interest are placed such that they form a fronto-parallel setup, in which the C-arm viewing direction is perpendicular to the ruler plane and the bone apparent contour plane [Fig. 4(a)]. A sequence of overlapping images is then acquired by translating the C-arm parallel to the bone axis. Subsequent images should overlap in 20%-60% of their area. The images are automatically downloaded to the computer as they are acquired, and the resulting undistorted panoramic image is displayed on the computer and on the C-arm screens.

Fig. 2. Equipment and its setup: (a) dewarp grid mounted on the C-arm image intensifier; (b) orthopaedic X-ray ruler.

Fig. 3. Operating room setup: the ruler is placed next to the patient, parallel to the long bone axis. To acquire the overlapping images for the panorama, the fluoroscopic C-arm is moved parallel to the long bone axis.

The method just described relies on four assumptions: 1) images are acquired in a fronto-parallel setup; 2) the C-arm orientation does not change between the calibration and the image acquisition; 3) there is sufficient overlap between the individual fluoroscopic X-ray images; and 4) the user selects the reconstruction plane. These assumptions are both practical and realistic. First, the fronto-parallel C-arm setup is commonly used in long bone surgery, since it corresponds to the anterior-posterior and the lateral viewpoints, which are very familiar to surgeons and X-ray technicians. It is the only viewing setup which en-

ables metric measurements on the resulting panorama, since it induces a transformation that preserves angles and distance ratios. Other viewing setups require a priori knowledge of angular and distance relationships between viewed objects that are seldom available. Second, the C-arm orientation remains the same during image acquisition, but may differ from the one at the time it was calibrated. The C-arm can be reoriented to the calibrated orientation using the gantry graduations to an accuracy of five degrees or less. Larger angular variations will introduce more error but will not cause the method to fail. Our previous study shows that the distortion correction discrepancies for these small angle variations are submillimetric [15]. Third, sufficient (20%-60%) overlap between consecutive images can be easily achieved by comparing the ruler markings that appear in the acquired images. The X-ray technician can determine if the overlap is sufficient and, if not, discard the acquired image, adjust the C-arm position, and acquire another image. Also, the entire image set can be discarded if, upon visual inspection, the result is incorrect. More overlap than 60% is neither necessary nor desirable, since the patient's radiation exposure should be minimized. Typically, 4-10 images are sufficient for long bone panoramic images. Fourth, the user interaction to select the desired reconstruction plane is essential, since there is one such plane for each anatomical structure and implant. For example, in intramedullary tibial fracture reduction surgery (described in detail in Section IV-C), there are two planes: the tibial plane and the nail plane. One of them should be chosen, depending on where the visualization and the measurements will be done. To define the plane, the system prompts the user to mark the contour of interest. Inputting the contour requires minimal user effort and can be completed in several seconds of user interaction.

Fig. 4. (a) Fronto-parallel setup: the C-arm camera viewing direction is perpendicular to the ruler plane and the bone apparent contour plane; (b) Camera model: $p$ is a point in space, $p_1$ and $p_2$ are its projections from camera poses $c_1$ and $c_2$. The coordinate origin coincides with $c_1$. The camera at $c_2$ is rotated by $R$ and translated by $t$.

TABLE I: Panorama construction algorithm.

III. PANORAMA CONSTRUCTION ALGORITHM

The algorithm creates a single rectangular panoramic view from the individual images in three steps: 1) distortion correction; 2) image alignment; and 3) image composition (Table I). For distortion correction, we compute the distortion map from the previously acquired grid image and the geometric model of the grid using local bilinear interpolation [15] and apply it to each image (a short code sketch of this resampling step is given below). For image alignment, we use the ruler's graduations to compute the planar rigid transformation between successive pairs of images by finding correspondences in their overlapping regions. To extract the graduations, we first identify the ruler by extracting its main thread with a modified Hough transform. This defines a region of interest where the graduations can be isolated by performing edge detection. We then compute the planar transformation that maximizes the Normalized Cross Correlation similarity measure between the original images.
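For illustration, the resampling at the heart of the distortion-correction step can be sketched as follows. This is a minimal sketch, not the implementation used in this work: the Image and DewarpMap types are hypothetical stand-ins, and the dense map (one source location per corrected pixel, derived from the imaged grid) is assumed to have been computed already.

#include <cmath>
#include <vector>

// Minimal gray-scale image: row-major pixels, 8-bit.
struct Image {
    int width = 0, height = 0;
    std::vector<unsigned char> pixels;  // size = width * height
    unsigned char at(int x, int y) const { return pixels[y * width + x]; }
};

// Hypothetical dewarp map: for each corrected pixel (x, y), the sub-pixel
// location in the distorted source image it originates from.
struct DewarpMap {
    int width = 0, height = 0;
    std::vector<float> srcX, srcY;      // size = width * height
};

// Resample one pixel from the distorted image by bilinear interpolation.
static unsigned char bilinearSample(const Image& img, float x, float y) {
    int x0 = static_cast<int>(std::floor(x));
    int y0 = static_cast<int>(std::floor(y));
    if (x0 < 0 || y0 < 0 || x0 + 1 >= img.width || y0 + 1 >= img.height)
        return 0;                        // outside the field of view
    float fx = x - x0, fy = y - y0;
    float top = (1 - fx) * img.at(x0, y0)     + fx * img.at(x0 + 1, y0);
    float bot = (1 - fx) * img.at(x0, y0 + 1) + fx * img.at(x0 + 1, y0 + 1);
    return static_cast<unsigned char>(top * (1 - fy) + bot * fy + 0.5f);
}

// Build the corrected image pixel by pixel, looking up where each pixel
// originated in the distorted image.
Image correctDistortion(const Image& distorted, const DewarpMap& map) {
    Image out;
    out.width = map.width;
    out.height = map.height;
    out.pixels.resize(static_cast<size_t>(map.width) * map.height);
    for (int y = 0; y < map.height; ++y)
        for (int x = 0; x < map.width; ++x) {
            size_t i = static_cast<size_t>(y) * map.width + x;
            out.pixels[i] = bilinearSample(distorted, map.srcX[i], map.srcY[i]);
        }
    return out;
}

The same routine is applied once per acquired image before any alignment takes place, so its cost is a small fixed fraction of the reported 5-10 s processing time per data set.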
To adjust the transformation to the bone's apparent contour, the user specifies a contour of interest (there might be more than one) and the images are realigned according to it. For image composition, we select the first image as the reference and align all other images to it. We compute the panoramic image size and apply the computed transformations to the undistorted individual images. The resulting panoramic image has overlapping pixels coming from two or more images. Their values are computed by taking either the average, median, maximum, or minimum of the individual pixel values. From visual inspection, we concluded that the maximum yields the most uniform image. All the panoramas in this paper were computed using this rule. Fig. 5 illustrates the steps of the algorithm on a dry femur. Note that the method handles partial ruler occlusions.

Fig. 5. Panorama of a dry femur. The top row shows the original images, the middle row shows the images after distortion correction, and the bottom row shows the resulting panorama. Scissors and K-wires were placed below and above the femur.
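To make the composition rule just described concrete, the following minimal sketch blends images that are assumed to have been warped into panorama coordinates already, with uncovered pixels marked by a negative value; the WarpedImage type and that marking convention are illustrative, not the code used in this work.

#include <algorithm>
#include <vector>

// One image already warped into panorama coordinates. Each warped image is
// assumed to span the full panorama canvas; a negative value marks pixels
// the image does not cover (illustrative convention).
struct WarpedImage {
    int width = 0, height = 0;        // panorama dimensions
    std::vector<float> pixels;        // -1 where the image has no data
};

// Compose overlapping images: where several images cover the same pixel,
// keep the maximum intensity, the rule found above to yield the most
// uniform panorama.
std::vector<float> composeMax(const std::vector<WarpedImage>& images,
                              int width, int height) {
    std::vector<float> pano(static_cast<size_t>(width) * height, -1.0f);
    for (const WarpedImage& im : images)
        for (size_t i = 0; i < pano.size(); ++i)
            if (im.pixels[i] >= 0.0f)
                pano[i] = std::max(pano[i], im.pixels[i]);
    return pano;                      // -1 entries remain uncovered
}

Swapping std::max for an average, median, or minimum reproduces the alternative composition rules mentioned above.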

Next, we describe in detail the camera projection model and the image alignment method, which is the most technically challenging step.

A. Camera and Projection Model

We model the fluoroscopic camera as a pin-hole camera, as this has been shown to be an appropriate approximation of the X-ray imaging process [15]. We assume, based on our previous studies and those of others, that the camera internal parameters do not change when the camera poses are only translations on a plane [15]-[17].

The intrinsic camera parameters are: $f$, the camera focal length; $(u_0, v_0)$, the image origin coordinates at the intersection of the optical axis and the image plane; $k_u$ and $k_v$, the horizontal and vertical pixel scale factors in the image plane; and $\theta$, the angle between the image plane axes. Following [21], the camera imaging parameters are modeled by a 3x3 camera projection matrix $A$ defined by

$$ A = \begin{pmatrix} f k_u & -f k_u \cot\theta & u_0 \\ 0 & f k_v / \sin\theta & v_0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (1) $$

Without loss of generality, we align the world coordinate system with the coordinate system of the first image camera pose. The second image is acquired after the camera has undergone a rotation, represented by a 3x3 rotation matrix $R$, and a translation, represented by a vector $t$. Let $p$ be a point in space, and let $p_1$ and $p_2$ be its two projections in homogeneous coordinates from the two camera viewpoints [Fig. 4(b)]. In the first camera coordinate system we have

$$ z_1 p_1 = A p \qquad (2) $$

where $z_1$ is the third coordinate of point $p$. Similarly, in the second camera coordinate system we have

$$ z_2 p_2 = A (R p + t) \qquad (3) $$

where $z_2$ is the third coordinate of point $Rp + t$. Substituting (2) into (3) we obtain

$$ z_2 p_2 = z_1 A R A^{-1} p_1 + A t. \qquad (4) $$

We seek the projective mapping (homography matrix) $H$ which maps $p_1$ onto $p_2$. The homography relates the two projections $p_1$ and $p_2$, and is computed without actually knowing the spatial coordinates of $p$. Let $\Pi$ be a plane containing point $p$, defined by normal vector $n$ and distance offset $d$ in world coordinates. Then

$$ n^T p = d. \qquad (5) $$

Substituting (2) into (5) we obtain

$$ \frac{z_1 \, n^T A^{-1} p_1}{d} = 1. \qquad (6) $$

Substituting (6) into (4) we obtain

$$ z_2 p_2 = z_1 \left( A R A^{-1} + \frac{A t \, n^T A^{-1}}{d} \right) p_1 $$

and, as $p_2$ is in homogeneous coordinates, we can drop the scale factor $z_1 / z_2$. This yields the desired projective mapping

$$ H = A R A^{-1} + \frac{1}{d} A t \, n^T A^{-1}. \qquad (7) $$

Note that the first term of this expression, $A R A^{-1}$, is the same for all points and is independent of the plane parameters. The second term, $A t n^T A^{-1} / d$, establishes the dependency on the plane parameters $n$ and $d$. The homography holds for all imaged points when the camera transformation is purely rotational (the second term equals zero) or when all imaged points are coplanar (both terms are identical for all points). We use this equation later on to estimate the parallax error and to compute a scale factor for the translation between pixels and millimeters.

Based on our image acquisition protocol, we can make the following simplifications. Since the individual images have been corrected for geometric distortion, the angle between the image plane axes, $\theta$, is 90 degrees, and the horizontal and vertical pixel scale factors, $k_u$ and $k_v$, are both 1. The camera projection matrix then becomes

$$ A = \begin{pmatrix} f & 0 & u_0 \\ 0 & f & v_0 \\ 0 & 0 & 1 \end{pmatrix}. \qquad (8) $$

Since the images are acquired by translating the C-arm, the camera poses lie on a plane, with a rotational component between images only around the optical axis. The apparent contour plane of the bone is nearly planar (the shape of long bones is nearly cylindrical) and roughly parallel to the image plane, so the images are in fronto-parallel position [Fig. 4(a)]. Thus, the viewed plane is parallel to the image plane, and the viewed points are on a plane defined by $n = (0, 0, 1)^T$ and distance $d$. This setup restricts the relationships between ruler, anatomy, and camera, and is the only one that preserves angles and scales distances. Since the images are acquired by translating the C-arm parallel to the viewed plane, the translation between images is $t = (t_x, t_y, 0)^T$. These conditions yield the well-known expression for the rigid planar mapping

$$ H = \begin{pmatrix} \cos\alpha & -\sin\alpha & t_u \\ \sin\alpha & \cos\alpha & t_v \\ 0 & 0 & 1 \end{pmatrix} \qquad (9) $$

where $\alpha$ is the in-plane rotation angle and $(t_u, t_v)$ is the in-plane translation in pixels. The mapping in a fronto-parallel acquisition is a special case of the general perspective projection in which the distance ratios between points and the angles between lines are preserved. Once a known distance on the image is identified (e.g., the distance between ruler graduations), metric measurements can be made directly on the image.

B. Image Alignment

Pairs of consecutive images are aligned using a feature-based alignment method that uses the graduations of the ruler to compute the planar mapping transformation. Since the images are in fronto-parallel position, the transformation relating the images is planar, and thus the problem reduces to estimating three parameters between subsequent images. Image alignment consists of four steps (Table I): 1) find the ruler region; 2) segment the graduations; 3) compute the transformation; and 4) realign the images to compensate for parallax. We describe each step in detail next.

1) Ruler Region Identification: The main ruler thread is located using a modified two-dimensional line Hough transform [22]. The transform computes the ruler's angle and distance from the origin. Since we are looking for a pair of parallel lines (the boundaries of the main thread), we add a constraint to the line Hough transform voting scheme: pixels vote for a certain line only if there is another pixel in the image which is on a line parallel to the current one. The differences in angle and translation between pairs of images are the first two alignment parameters. This method is robust to partial occlusions of the ruler (Fig. 5).
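The parallel-line constraint can be approximated in code as follows. This is a minimal, self-contained sketch and not the implementation used in this work: it accumulates a standard (rho, theta) Hough space over precomputed edge pixels and then scores pairs of peaks at the same angle, a simpler post-hoc variant of constraining the voting itself. The Point type and the expected thread-width bounds minGap/maxGap are illustrative assumptions.

#include <cmath>
#include <utility>
#include <vector>

struct Point { int x, y; };

// Locate the ruler's main thread as the best pair of parallel lines among
// a set of edge pixels. Returns (angle index in degrees, rho of the
// midline between the two parallel boundary lines).
std::pair<int, int> findRulerThread(const std::vector<Point>& edges,
                                    int imgWidth, int imgHeight,
                                    int minGap, int maxGap) {
    const double kPi = 3.14159265358979323846;
    const int nTheta = 180;                        // 1-degree angle bins
    const int maxRho = static_cast<int>(
        std::ceil(std::hypot(imgWidth, imgHeight)));
    const int nRho = 2 * maxRho + 1;               // rho in [-maxRho, maxRho]

    // Standard Hough accumulation: each edge pixel votes for all lines
    // rho = x*cos(theta) + y*sin(theta) passing through it.
    std::vector<int> acc(static_cast<size_t>(nTheta) * nRho, 0);
    for (const Point& p : edges)
        for (int t = 0; t < nTheta; ++t) {
            double th = t * kPi / nTheta;
            int rho = static_cast<int>(std::lround(
                p.x * std::cos(th) + p.y * std::sin(th)));
            ++acc[static_cast<size_t>(t) * nRho + (rho + maxRho)];
        }

    // Parallel-pair scoring: for each angle, find the two rho bins,
    // separated by the expected thread width, with the largest joint vote.
    int bestT = 0, bestRho = 0, bestScore = -1;
    for (int t = 0; t < nTheta; ++t) {
        const int* row = &acc[static_cast<size_t>(t) * nRho];
        for (int r1 = 0; r1 < nRho; ++r1)
            for (int gap = minGap; gap <= maxGap && r1 + gap < nRho; ++gap) {
                int score = row[r1] + row[r1 + gap];
                if (score > bestScore) {
                    bestScore = score;
                    bestT = t;
                    bestRho = r1 + gap / 2 - maxRho;  // midline of the thread
                }
            }
    }
    return {bestT, bestRho};
}

The returned angle and midline distance, computed for two consecutive images, give their relative rotation and their translation perpendicular to the ruler, the first two alignment parameters mentioned above.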
2) Graduations Segmentation: Next, we examine the region near the ruler's main thread [Fig. 6(a)]. We sum the columns in this region of the image and perform a one-dimensional edge detection on the resulting signal [Fig. 6(b)]. This one-dimensional signal has pronounced local minima at the graduation edges. Performing edge detection on this signal yields the right and left edges of the ruler graduations. Ruler graduations are then identified as pairs of these right and left edges [Fig. 6(c)-(e)].

3) Transformation Computation: Next, we compute the missing translation parameter, which corresponds to translation along the ruler's main thread. The visible graduations define the possible relative image translations along the axis parallel to the ruler. Valid translations are only those that align the graduations between images. Since there are at most a few dozen such graduations, the search space for the translation is both discrete and small. We use Normalized Cross Correlation (NCC) [23], also known as the Pearson correlation coefficient, as our similarity measure:

$$ NCC(I_1, I_2) = \frac{\sum_{(u,v)} \left( I_1(u,v) - \bar{I}_1 \right) \left( I_2(u,v) - \bar{I}_2 \right)}{\sqrt{\sum_{(u,v)} \left( I_1(u,v) - \bar{I}_1 \right)^2 \sum_{(u,v)} \left( I_2(u,v) - \bar{I}_2 \right)^2}} $$

where $I_1(u,v)$ and $I_2(u,v)$ are the image pixel intensities at location $(u,v)$, and $\bar{I}_1$ and $\bar{I}_2$ are the average image pixel intensities of the overlapping areas of the first and second images, respectively. The NCC measure is invariant to the amount of overlap between images and can be computed efficiently. Because the registration is unimodal, the linear dependence measured by NCC is appropriate.

4) Image Realignment: The image alignment described above aligns the images on the ruler's plane. However, the correct alignment should be on the plane defined by the bone

apparent contour. The conversion scale factor from pixels to millimeters should also be corrected to the bone apparent contour plane. We compute a translational correction for the pairwise transformations as follows. Let $d_r$ be the ruler plane distance and $d_a$ the apparent contour plane distance. From (7), the homography defined by the ruler plane is

$$ H_r = A R A^{-1} + \frac{1}{d_r} A t \, n^T A^{-1}. $$

The homography defined by the apparent contour plane is

$$ H_a = A R A^{-1} + \frac{1}{d_a} A t \, n^T A^{-1}. $$

The homographies $H_r$ and $H_a$ differ only by a scaling of the translational component. To find the translational correction, the user marks a contour of interest on two consecutive images using the Livewire [24] segmentation tool. It allows the user, with a few coarsely placed input points, to accurately segment the contour edges and, thus, define the reconstruction plane. For each contour point on the first image, we rotate it by the rotational component of the transformation and then search for the closest contour point on the second image in the direction of the translational component. The scale factor $s$ is the mean scale factor computed over all matched contour points.

The scaling of the conversion from pixels to millimeters is computed as follows. Let $f$ be the camera focal length, $l_r$ and $l_a$ the image lengths of an object in the ruler and apparent contour planes, $L_r$ a known distance in the ruler plane, and $L_a$ the corresponding unknown distance in the apparent contour plane. Then, from the pin-hole model, we have

$$ l_r = \frac{f L_r}{d_r}, \qquad l_a = \frac{f L_a}{d_a}. $$

Thus

$$ \frac{L_a}{l_a} = \frac{d_a}{f} = s^{-1} \frac{L_r}{l_r}, \qquad s = \frac{d_r}{d_a}. $$

To convert from pixels to millimeters, we scale the known ruler conversion factor by $s^{-1}$, which is the inverse of the translational scale factor.

Fig. 6. Ruler's graduations segmentation: (a) ruler image after graduations segmentation; (b) graph of the sum of gray-level pixel values (vertical axis) as a function of horizontal position; (c) detail of the ruler graduations; (d) its corresponding sum-of-intensities graph; and (e) the corresponding gradient magnitude graph.

IV. EXPERIMENTAL RESULTS

We conducted experiments to validate our method, quantify its accuracy, and determine its clinical value. Specifically, we performed a geometric error analysis study to determine the accuracy of the distortion correction and alignment steps in the panorama construction algorithm. We created panoramic images of several in vitro and in vivo long bones and showed them to doctors for evaluation. Finally, we created panoramas during a tibial intramedullary fracture reduction surgery, which were used by the surgeon to evaluate the mechanical axis, choose the nail length, and document the surgery outcome. We describe each set of experiments in detail next.

All images were acquired with a BV29 C-arm (Philips, The Netherlands). They were 8-bit gray-scale images with a pixel size of about 0.5 mm x 0.5 mm in the imaging setup described above. The distortion correction and panorama construction algorithms were implemented in C++ and ran on a 600-MHz, 256-MB RAM PC running Windows NT. Processing times were 5-10 s per data set consisting of 4-10 individual images.

A. Error Quantification

Geometric errors in the panorama arise from three sources: image distortion correction, Hough transform discretization, and the distance difference between the ruler and the apparent contour plane. Our experiments show that the geometric distortion can be corrected across the entire field of view to an average accuracy of 0.25 mm (0.4 mm in the worst case) [15]. The standard Hough transform discretizes the parameter space, both the angle and the translation. To minimize the errors introduced by the discretization, we use a parameter space with increments of 0.5 pixels for the translation and 0.5 degrees for the rotation.

The main cause of errors is the distance difference between the radiation source and the planes defined by the apparent contour and the ruler. When this distance difference is not taken into account, the resulting panorama has undesired scaling errors and parallax effects (Fig. 7). To evaluate the accuracy of the rescaling, we imaged two rulers (top and bottom) side by side at different heights [Fig. 8(a)]. We split each image into two, so that only one ruler appears in each, computed the transformations, and created panoramic images on each ruler's own plane [Fig. 8(c)]. We then applied the transformation of the top ruler panorama to the bottom ruler panorama, and created two new panoramas, without and with realignment correction [Fig. 8(b) and (d),

respectively]. The ruler's own plane and realignment-corrected panoramas are nearly identical [Fig. 8(c) and (d)], while the nonrealigned panorama [Fig. 8(b)] shows parallax. Table II quantifies these discrepancies as a function of the height difference. The average image measurement error is estimated at about one pixel. The results show that realignment is necessary, and that the realignment error is at most 1.1 mm, which is clinically acceptable.

TABLE II: Realignment measurements in millimeters (error in parentheses). The first column shows the height difference between the two rulers. The second and third columns show which ruler was used and its length. The fourth, fifth, and sixth columns show the ruler length measured in the panorama without correction, in the ruler's own plane, and with rescaling correction in the other ruler's plane, respectively.

Fig. 7. Illustration of the parallax effect on the leftmost K-wire in Fig. 5. The right image is the result of composing the two left images using the transformation computed for the bone reconstruction plane. The K-wire tip is about 50 mm above the ruler.

Fig. 8. Rescaling accuracy experiment: (a) original dewarped images; (b) without correction; (c) own plane; (d) with correction.

TABLE III: Actual and measured distances on dry long bones, in millimeters. The first column indicates the bone region in which the measurements were performed. The second column indicates the number of images in which the anatomy appears. The third and fourth columns show the actual and measured distance values. The fifth column is the image measurement error.

B. Preoperative Experiments

We acquired six sets of fluoroscopic X-ray images of dry long bones (humerus and femur) and two sets of in vivo long bones (humerus and tibia). We acquired 4-8 images for each data set, placing a ruler with 5-mm graduations next to the anatomy following the protocol described above. The overlap between consecutive images was about 50%, which was visually verified with the aid of the ruler's graduations. For qualitative evaluation, we showed the panoramas to an orthopedic surgeon and got very satisfactory results: the bone structures and the ruler showed continuous boundaries, with very small jumps (one or two pixels) at locations where images were composed. Figs. 1 and 5 show in vivo and in vitro panoramas. The images were deemed accurate and of clinical value.

For quantitative evaluation, we performed physical distance measurements on the dry humerus and femur and compared them with distance measurements on the images. The physical distance measurements were performed with a desk ruler with 1-mm graduations, so the average measurement error is estimated at about half a millimeter. Table III summarizes these results. We observe that the error is below 2.8 mm (6%) in all cases. The relatively large error in case 3 is due to the fact that the measurement on the femoral head was relatively far from the apparent contour plane.

C. Intraoperative Experiment

To test the method and protocol in an intraoperative setting, we participated in a tibial intramedullary fracture reduction surgery. The surgery restores the integrity of the fractured bone by means of a nail inserted in the medullary canal. The nail is placed, without surgically exposing the fracture, through an opening on the tibial fossa, right below the knee joint.
The surgeon manually aligns the bone fragments by manipulating them through the skin, drives the nail in, and inserts lateral proximal and distal interlocking screws to prevent fragment rotation and bone shortening. The procedure is performed under X-ray fluoroscopy, which is used to view the position of bone fragments, surgical tools, and implants. The mechanical axis position and nail length are determined empirically, and sometimes require several trials.

Right before the surgery, the C-arm was fitted with the dewarp plate and an anterior-posterior image was acquired. The dewarp plate was then removed and the image intensifier was draped. This step required four minutes and did not interfere with the surgery preparations. Before the fracture reduction, the surgeon placed the sterilized ruler on the operating table, right next to the patient's tibia. We acquired nine fluoroscopic X-ray images and created a panorama [Fig. 9(a)]. The overlap was visually verified using the ruler's graduations. The surgeon used the resulting panorama to determine the extent of the fracture compression, to determine the nail length, and to assess the mechanical axis. The surgery proceeded as usual. Once the reduction and nail locking were completed, we acquired an additional set of six fluoroscopic X-ray images and created a panorama [Fig. 9(b)].

The surgeon used it to verify that the reduction was properly performed, that the leg length was as desired, and that the nail axis was properly aligned with the bone mechanical axis. The image acquisition for the panorama required only a few minutes in each case. The surgeon concluded that the panoramas helped him make the right nail-length decision and reduced the need for additional validation X-ray images. It also eliminated the need for a postoperative film X-ray.

Fig. 9. Intraoperative panoramas of a fractured tibia: (a) before fracture reduction; (b) after intramedullary nailing and locking.

V. CONCLUSION

We have presented a simple and robust method for creating a single panoramic image of a long bone from individual fluoroscopic X-ray images. Panoramic images are useful preoperatively for diagnosis, and intraoperatively for tasks such as long bone fragment alignment, anatomical measurement, and documentation of surgical outcomes. Unlike existing methods, ours uses readily available hardware, requires a simple image acquisition protocol with minimal user input, and works with existing fluoroscopic C-arm units without modifications. The method has been successfully tested on in vitro and clinical cases. Our experiments indicate that the method is practical and produces images and measurements which are clinically acceptable.

We are planning to conduct a quantitative study to compare the accuracy of measurements made on the panoramas to those obtained from CT models, which constitute the gold standard. We are also planning to further evaluate the clinical value of the panoramic images on actual patient cases in a variety of routine preoperative and intraoperative procedures. As real-time tracking systems become ubiquitous in the operating room, we can envision extending our method to take advantage of the data they provide (however, we do not believe that the use of a tracker for the purpose of constructing panoramas alone is justified). By fitting a tracking device onto the C-arm and calibrating it, we can obtain accurately, in real time, the position and orientation of its imaging plane. Because the relative positions of the individual C-arm images would then be known, we would no longer need to restrict the viewing to a fronto-parallel setup or require a planar apparent contour.

ACKNOWLEDGMENT

The authors would like to thank Dr. R. Mosheiff and Dr. A. Khoury from the Hadassah Medical Center, Jerusalem, for facilitating the experiments. They would also like to thank Prof. C. Milgrom for advice during the system design.

REFERENCES

[1] J. S. Chou and J. Qian, Automatic Full-Leg Mosaic and Display for Peripheral Angiography, U.S. Patent, Nov. 10.
[2] S. K. Murthy, C. L. Novak, J. Qian, and Z. Wu, System for Generating a Compound X-ray Image for Diagnosis, U.S. Patent, Aug. 8.
[3] D. L. Wilson, Whole-Leg X-ray Image Processing and Display Techniques, U.S. Patent, June 16.
[4] D. Capel and A. Zisserman, Automated mosaicing with super-resolution zoom, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1998.

[5] M. Irani and P. Anandan, Video indexing based on mosaic representations, Proc. IEEE, vol. 86, May 1998.
[6] B. Rousso, S. Peleg, and I. Finci, Mosaicing with generalized strips, in Proc. DARPA Image Understanding Workshop, 1997.
[7] I. Zoghlami, O. Faugeras, and R. Deriche, Using geometric corners to build a 2D mosaic from a set of images, in Proc. Conf. Computer Vision and Pattern Recognition, p. 420.
[8] A. Zomet and S. Peleg, Efficient super-resolution and applications to mosaics, in Proc. Int. Conf. on Pattern Recognition, Barcelona, Spain, 2000.
[9] H. W. van Eeuwijk, S. Lobregt, and F. A. Gerristen, A novel method for digital X-ray imaging of the complete spine, in Proc. Computer Vision, Virtual Reality and Robotics in Medicine Conf., 1997.
[10] B. Verdonck, R. Nijlunsing, N. Melman, and H. Geiger, Image quality and X-ray dose for translation reconstruction overview imaging of the spine, colon, and legs, in Proc. Computer Assisted Radiology and Surgery Conf., Berlin, Germany.
[11] H. Geijer et al., Digital radiography of scoliosis with a scanning method: Initial evaluation, Radiology.
[12] P. Dewaele, P. Vuylsteke, S. Van de Velde, and E. Schoeters, Full-leg/full-spine image stitching, a new and accurate CR-based imaging technique, in Proc. SPIE, vol. 5370, Medical Imaging: Image Processing.
[13] P. D. Allen, C. J. Taylor, A. L. Herrick, and T. Moore, Image analysis of nailfold capillary patterns from video sequences, in Proc. Medical Image Computing and Computer-Assisted Intervention Conf., 1999.
[14] L. Weng and A. P. Tirumali, Method and Apparatus for Generating Large Compound Ultrasound Image, U.S. Patent, Nov. 19.
[15] Z. Yaniv et al., Fluoroscopic image processing for computer-aided orthopedic surgery, in Proc. Medical Image Computing and Computer-Assisted Intervention Conf., 1998.
[16] C. Brack et al., Accurate X-ray based navigation in computer-assisted orthopedic surgery, in Proc. Computer Assisted Radiology and Surgery Conf., 1998.
[17] R. Hofstetter, M. Slomczykowski, M. Sati, and L. P. Nolte, Fluoroscopy as an imaging means for computer-assisted surgical navigation, Comput. Aided Surg., vol. 4, no. 2.
[18] S. Lavallée, in Computer-Integrated Surgery, Technology and Clinical Applications. Cambridge, MA: MIT Press, 1995, ch. 5.
[19] M. J. Murphy, An automatic six-degree-of-freedom image registration algorithm for image-guided frameless stereotactic radiosurgery, Med. Phys., vol. 24, no. 6.
[20] T. O. Ozanian and R. Phillips, Image analysis for computer-assisted internal fixation of hip fractures, Med. Image Anal., vol. 4, no. 2.
[21] O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint. Cambridge, MA: MIT Press, 1993.
[22] R. Jain, R. Kasturi, and B. G. Schunk, Machine Vision. New York: McGraw-Hill, 1995.
[23] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 2002, ch. 12.
[24] E. N. Mortensen and W. A. Barrett, Interactive segmentation with intelligent scissors, Graphical Models Image Processing, vol. 60, no. 5.

Chapter 4

Robot-assisted Distal Locking of Long Bone Intramedullary Nails: Localization, Registration, and In-Vitro Experiments

4.1 Introduction

The growing demand for precise, minimally invasive surgical interventions is driving the search for ways to use computers in conjunction with advanced assistance devices to improve surgical planning and execution. Over the past decade, a variety of Computer Integrated Surgery (CIS) systems have been designed, mostly for neurosurgery, laparoscopy, maxillofacial surgery, and orthopedics [28]. Recent studies are starting to show the clinical benefits of these systems.

CIS systems can potentially benefit many orthopedic surgical procedures, including total hip and total knee replacement, pedicle screw insertion, fracture reduction, and anterior cruciate ligament (ACL) reconstruction. These procedures are ubiquitous and high volume in operating rooms worldwide. They involve rigid bone structures that image well, require preoperative planning, and employ instruments and tools, such as implants, screws, drills, and saws, that require precise positioning. Indeed, a few dozen CIS systems for these procedures are currently deployed [36].

Our work focuses on a technique for fracture reduction called closed intramedullary nailing, which is currently the routine procedure of choice for reducing fractures of the femur and the tibia [6]. It restores the integrity of the fractured bone by means of a nail inserted in the medullary canal. The nail is inserted, without surgically exposing the fracture, through an opening, usually in the proximal part of the bone. The surgeon reduces the fracture by manipulating the proximal and distal bone fragments through the leg until they are aligned. The surgeon then inserts a guide wire, reams the canal if necessary, and drives the nail in. In most cases, the

surgeon also inserts lateral proximal and distal interlocking screws to prevent fragment rotation and bone shortening. The procedure is performed under X-ray fluoroscopy, which is used to view the position of bone fragments, surgical tools, and implants. Numerous X-ray fluoroscopic images are required, especially during distal locking [43].

Figure 4.1: X-ray fluoroscopic images acquired during pilot hole drilling.

Distal locking, the insertion of lateral screws to prevent nail rotation, has long been recognized as one of the most challenging steps in this procedure. Since the nail deforms by several millimeters to conform to the bone canal shape, the exact positions of the distal locking nail hole axes cannot be determined in advance. By repeatedly alternating between anterior-posterior and lateral X-ray fluoroscopic views, the surgeon adjusts the entry point and orientation of the drill so that its axis coincides with the corresponding nail hole axis. Drilling proceeds incrementally, with each advance verified with a new pair of X-ray fluoroscopic images (Figure 4.1). Once the pilot hole passing through the distal locking nail hole has been drilled, the locking screw is fastened. Complications include inadequate fixation, malrotation, bone cracking, cortical wall penetration, and bone weakening due to multiple or enlarged pilot holes. The literature reports that the surgeon's direct exposure to radiation per procedure without the use of CIS systems is 3-30 minutes, of which 31-51% is spent on distal locking, depending on the patient anatomy and the surgeon's skill [43].

4.2 Previous work

Many non-CIS devices have been developed for distal locking [31]. Examples include proximally mounted targeting devices, stereo fluoroscopy, mechanical guides, and optical and electro-magnetic navigation systems that help locate the center of the

distal locking nail holes. However, all of these devices and techniques have deficiencies: they are only selectively applicable, are cumbersome and difficult to use, or are not sufficiently accurate, and thus fail to significantly reduce the likelihood of patient complications.

Fluoroscopy-based CIS navigation systems [19, 21, 26] take the guesswork out of targeting. These systems enhance, reduce, or altogether eliminate X-ray fluoroscopic images by replacing them with a virtual reality view in which the bone and instrument positions are continuously updated and viewed on-screen as they move. They help the surgeon align the drill axis with the distal locking nail hole axis to an accuracy of about 1 mm and 1 degree. However, they do not provide a mechanical guide for the hand-held drill, which can slip or deviate from its planned trajectory as the drilling proceeds. Thus, the surgical outcomes are still largely dependent on the surgeon's skill.

Robot-based CIS systems are designed to assist the surgeon in implementing the preoperative plan by mechanically positioning and sometimes executing the surgical action itself [8, 45]. The robots are either adapted floor-standing industrial robots or table-mounted custom-designed serial robots. They are usually voluminous and heavy, despite the fact that they have relatively small workloads and work volumes. In these systems, bone immobilization or real-time dynamic tracking is required, since the relative configuration of the bone with respect to the robot must be known precisely at all times. This complicates the registration procedure and adversely affects the overall system accuracy.

A very recent novel development is the miniature parallel robot MARS [40]. The miniature robot is directly mounted onto the patient, forming a single rigid body with the anatomy. This removes the need for anatomy immobilization or real-time tracking during surgery. We propose to use this robot to automatically align a drill guide with the distal nail hole axes, providing accurate mechanical guidance for manual drilling [27].

4.3 System concept

The proposed system consists of a miniature patient-mounted robot, a drill guide, an image calibration ring for the fluoroscopic X-ray unit, a PC with a video frame grabber, and image-based guidance software. MARS is a 5x5x7 cm, 150-gram, six-degree-of-freedom parallel manipulator whose work volume is about 10 cm^3 and whose accuracy is better than 0.1 mm. When locked, it is rigid and can withstand forces of a few kilograms. It is mounted either directly on the bone or on the nail head (Figure 4.2). Both options are minimally invasive and eliminate the need for leg immobilization or real-time tracking during surgery. The drill guide is a Delrin block with two guiding holes 30 mm apart (the spacing between the nail holes). It has a pattern of 28 3-mm stainless steel fiducial spheres asymmetrically distributed on two planes 20 mm apart, which are used for its spatial localization (Figure 4.3).

Figure 4.2: Photographs of (a) the in-vitro setup and (b) the MARS robot in its desired configuration, where the drill guide and nail hole axes are aligned, as shown by the two rods passing through them.

The surgical protocol is as follows. Once the fracture has been reduced and the nail has been inserted, the image calibration ring is mounted on the C-arm image intensifier. The surgeon then mounts the robot onto the patient, either directly onto the bone or onto the nail head. Our software then estimates the robot pose required for the drill guide axes and nail hole axes to coincide. This estimation can be done in one of two modes, interactive or fully automatic. In the interactive mode, the X-ray technician orients the C-arm, guided by our software, so that it is in a fronto-parallel setting with the nail holes. This mode requires the acquisition of a fluoroscopic image for every prospective orientation. In the fully automatic mode, a single fluoroscopic image is acquired and the required robot pose is estimated from it. Once the robot is positioned, the surgeon manually drills the pilot holes through the robot-mounted drill guide, removes the robot, and completes the surgery according to the standard protocol.

Accurate and robust computation of the transformation that aligns the drill guide and nail hole axes is a challenging image processing and pose estimation task. Localization of the nail holes and the drill guide is difficult because partial occlusions are inherent to the setup: the robot is mounted close to the nail holes, and the image includes the nail, bone, and soft tissue. The nail holes are small (5mm diameter, about 20 pixels), close together (30mm), and appear as ellipses in the images, so the accuracy with which their axes can be determined is limited. Furthermore, only one fluoroscopic X-ray image can be used, since there is no tracking of the C-arm pose. Finally, the C-arm imaging system exhibits orientation-dependent distortions and internal imaging parameter variations. To cope with these challenges, we have developed a novel model-based approach for robust and accurate localization of the drill guide target and nail holes and for the registration of their axes.

Figure 4.3: (a) drill guide target mounted on the robot and (b) its fluoroscopic X-ray image.

We model the fluoroscopic camera as a pin-hole camera with distortion, as this has been shown to be an appropriate approximation of the X-ray imaging process [5, 21, 34, 50]. In previous work, we describe a robust automatic C-arm calibration algorithm that includes fiducial localization, distortion correction, and camera calibration [34]. The algorithm computes the distortion correction and camera calibration parameters from an X-ray fluoroscopic image in three steps. First, it locates the fiducial projections and pairs them with their spatial locations. Next, it computes the distortion correction parameters, and then the calibration parameters. Accurate and robust localization of the fiducials and their pattern is the most important step, since all parameters critically depend on it. Our experiments show that submillimetric accuracy for the combined dewarping and camera calibration is achievable even when only 60% of the fiducials are detected.

The nail is modeled as a planar object with two circular holes. This is a good approximation, as the nail's distance from the camera focal point is 45 to 70 times larger than its 10-15mm diameter.

To align the robot so that the drill guide axes coincide with the nail hole axes, we need to close the transformation chain depicted in Figure 4.4:

$$T^{goal}_{base} = T^{tip}_{base}\, T^{pattern}_{tip}\, \left(T^{pattern}_{camera}\right)^{-1} T^{nail}_{camera}\, T^{goal}_{nail}$$

where $T^{tip}_{base}$ and $T^{pattern}_{tip}$ are known from design, $T^{pattern}_{camera}$ and $T^{nail}_{camera}$ are computed from the C-arm internal camera parameters and the fluoroscopic image, and $T^{goal}_{nail}$ is computed from the previous two transformations.

Our method consists of three steps: 1) C-arm distortion correction and calibration; 2) drill guide and nail hole identification; and 3) drill guide and nail pose estimation. We describe the latter two next.
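Closing this chain amounts to composing 4×4 homogeneous rigid transformations. The following is a minimal sketch in Python/NumPy (the function and argument names are illustrative, not the thesis implementation), writing T_a_b for the transformation that maps frame b coordinates into frame a, i.e., the thesis notation $T^{b}_{a}$:

```python
import numpy as np

def rigid_inverse(T):
    """Invert a 4x4 rigid transformation [R|t] analytically: (R^T, -R^T t)."""
    Ti = np.eye(4)
    R, t = T[:3, :3], T[:3, 3]
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def close_chain(T_base_tip, T_tip_pattern, T_camera_pattern,
                T_camera_nail, T_nail_goal):
    """T_base_goal = T_base_tip . T_tip_pattern . (T_camera_pattern)^-1
                     . T_camera_nail . T_nail_goal (the chain of Figure 4.4)."""
    return (T_base_tip @ T_tip_pattern @ rigid_inverse(T_camera_pattern)
            @ T_camera_nail @ T_nail_goal)
```

Inverting the rigid transformation analytically, by transposing the rotation, avoids the numerical error of a general matrix inverse.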

Figure 4.4: Robot alignment registration chain. Solid lines mark known transformations, dashed lines mark computed transformations.

4.4 Drill guide and nail hole identification

4.4.1 Drill guide identification

Drill guide identification is performed by first detecting the circular fiducials in the image and then finding the correct correspondence with their 3D model. The key issues are handling partial occlusions and missing fiducials, and minimizing positional inaccuracies.

The target fiducials are detected in two steps: 1) localization and 2) circle fitting. Localization is performed using a modified circle Hough transform [23] and a model-based analysis of the transform accumulator. Since the circular fiducials are darker than the image background, the Hough transform voting scheme is constrained so that edge pixels only cast their vote if the vector connecting them to the hypothesized circle center points in the direction opposite to the gradient at the edge location. The contents of the transform accumulator are examined to identify the k > 28 circles which received the most votes. Considering a few more candidate circles than the 28 target fiducials is necessary, since the accumulator may contain multiple peaks for the same fiducial (these can be higher than the peaks of other fiducials). The algorithm then computes the average radius and number of votes of the five circles with the most votes, and selects all circles whose radius is within ±2 pixels of the average radius and which received more than half of the average number of votes. These circles, with very high probability, belong to the target.

The selected set of circles may contain overlapping circles, which are due either to multiple responses for the same fiducial or to fiducial overlap. In our imaging setup,
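The gradient constraint can be implemented by having each edge pixel vote only along the direction opposite to its gradient, rather than over the full circle of candidate centers. A minimal sketch, assuming the edge positions and gradients have already been extracted (the array shapes and radius range are illustrative):

```python
import numpy as np

def constrained_circle_hough(edge_pts, gradients, shape, radii):
    """Circle Hough transform in which an edge pixel votes for a center only
    in the direction opposite to its gradient (fiducials darker than the
    background, so the gradient at the edge points away from the center)."""
    acc = np.zeros((len(radii), shape[0], shape[1]), dtype=np.int32)
    for (x, y), g in zip(edge_pts, gradients):
        n = np.linalg.norm(g)
        if n == 0:
            continue
        gx, gy = g / n
        for k, r in enumerate(radii):
            # The hypothesized center lies r pixels against the gradient.
            cx, cy = int(round(x - r * gx)), int(round(y - r * gy))
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                acc[k, cx, cy] += 1
    return acc  # peaks in acc are candidate (radius, center) circles
```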

there can be at most two overlapping fiducials. Thus, for each set of overlapping circles, we retain only the two circles with the highest number of votes. Each such pair of overlapping circles corresponds either to two overlapping fiducials or to a single fiducial: when the overlap area between the pair of circles is larger than 60% of a circle's area, it is a single fiducial; otherwise it is two.

Circle fitting is performed using the Random Sample Consensus (RANSAC) paradigm [10]. For each circle, the algorithm collects all edge elements contained in its circumscribing square and fits a circle to them. This removes the dependency on the discretization accuracy of the Hough transform; a sketch of this step follows the list below.

The correspondence between the detected fiducials and their 3D model is computed using homographies. Correctly pairing the detected fiducials with the 3D model fiducials is difficult, as there are always missing fiducials occluded by the nail. To overcome this, we exploit the geometry of the drill guide target: we use lines instead of points, since lines are less sensitive to partial occlusions. Since the fiducials are distributed on two planes, the goal is to find the pair of homographies which minimizes the distance between the detected circles and the result of applying the homographies to the target fiducials. The pair of homographies is computed in three steps:

1. Find the line with maximal support (either the long or the short target axis); the other axis is nearly perpendicular to the one found.

2. Find all lines which contain two spheres and are nearly perpendicular (parallel) to the long axis. Sort them according to their mean projection onto the long (short) axis. The next step requires that at least two lines per target plane were identified. According to the target design, this means there are 4-7 such lines for the long axis and 4-5 lines for the short axis. If more than the maximal number of lines is detected, the identification step has failed.

3. Go over all $\binom{7}{k}\binom{5}{j}$, $4 \le k \le 7$, $4 \le j \le 5$, line pairing options. For each pairing, compute the two homographies corresponding to the target planes using the direct linear transformation method [18]. Each pairing is ranked according to the number of target-image point matches and the sum of distances: we transform all target points and pair each with the closest image point which is at most 2r pixels away (where r is the average radius of the detected circles). The best pairing is the one which maximizes the number of matches; if there is more than one such pairing, we take the one which minimizes the total distance between the transformed target points and their matched image points.

We now have the target-image correspondences. These may contain pairings between several target points and a single image point, which happens when target spheres overlap in the image and only one is detected. All such multiple pairings are discarded. Finally, a visual validation is done by overlaying the image with the detected circles and the paired target spheres. Figure 4.5 shows the stages of drill guide target detection and identification, with Figure 4.5(f) used for visual validation.
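A minimal RANSAC circle fit over the edge elements in a candidate's circumscribing square might look as follows (Python/NumPy sketch; the inlier tolerance and iteration count are illustrative, and the thesis's exact fitting procedure may differ):

```python
import numpy as np

def fit_circle(pts):
    """Algebraic circle fit: x^2 + y^2 + D*x + E*y + F = 0 by least squares."""
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    b = -(pts[:, 0] ** 2 + pts[:, 1] ** 2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    center = np.array([-D / 2, -E / 2])
    return center, np.sqrt(center @ center - F)

def ransac_circle(pts, n_iter=200, tol=1.0, seed=0):
    """Fit circles to random 3-point samples, keep the largest consensus set,
    then refit the circle on its inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        center, r = fit_circle(pts[rng.choice(len(pts), 3, replace=False)])
        if not np.isfinite(r):
            continue  # degenerate (near-collinear) sample
        d = np.abs(np.linalg.norm(pts - center, axis=1) - r)
        inliers = pts[d < tol]
        if best is None or len(inliers) > len(best):
            best = inliers
    return fit_circle(best)
```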

Figure 4.5: Drill guide target identification: (a) distortion-corrected image; (b) result of edge detection overlaid on the previous image; (c) detected fiducials; (d) line with maximal support; (e) lines which have a support of two fiducials; (f) white circles mark the detected fiducials, and '+' and '−' denote the associated model planes.

4.4.2 Nail hole identification

The locations of the distal locking nail holes in the X-ray fluoroscopic image are determined by first locating the nail's longitudinal contour and then locating the holes at their expected positions with respect to this contour (Figure 4.6). To locate the nail's longitudinal contour, we use a 3D Hough transform in which the nail is modeled as a band consisting of two parallel lines with a known distance between them. The Hough transform voting scheme is constrained so that pixels on parallel lines only cast their vote if the gray level values between them are lower than the gray level values outside the band.

The search for the nail holes is then performed on the pixels contained between the nail's contours. The algorithm sweeps a parallelepiped window, whose sides are equal to the nail width, along the nail's medial axis. The two locations containing the maximal number of edge elements correspond to the locations of the distal locking nail holes. The algorithm then fits an ellipse to the edge data contained inside the parallelepipeds. The edge elements originate from the nail holes, the drill guide target, and the C-arm calibration target (Figure 4.6(b)), so the ellipse parameter estimation must take these outliers into account. We considered two approaches to cope with the outlying data: 1) RANSAC, and 2) a model-based approach.

The RANSAC approach is a general randomized framework for dealing with outlying data. In our context, subsets of five pixels are randomly chosen and the ellipse parameters are estimated using a generic conic fitting solution [51]. For each such estimate the consensus set is found, and the largest set is used as input for a least-squares estimation procedure. Edge elements are incorporated into the consensus set only if their geometric distance from the ellipse is below a predefined threshold.

The model-based approach uses only the edge elements which belong to the convex hull of the set of elements. This is because ellipses are convex shapes and because the nail is opaque, so outlying edges can only be present in the interior of the ellipse. In both approaches we estimated the final ellipse parameters using a non-linear geometric least-squares optimization (Downhill-Simplex), initialized with the result of an algebraic least-squares estimate [11, 16]. After experimenting with both approaches, we opted for the model-based one; the RANSAC approach failed occasionally due to its sensitivity to the threshold value for incorporation into the consensus set. Figure 4.6 shows an example of nail hole identification using the model-based approach. Final evaluation of the ellipse fitting is done by visual inspection.
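The model-based rejection is simple to state in code: keep only the convex hull of the edge elements, then fit a conic to the survivors. A minimal sketch in Python with NumPy and SciPy (the algebraic fit shown here is only the initialization; the thesis refines it with a geometric Downhill-Simplex optimization, which is omitted):

```python
import numpy as np
from scipy.spatial import ConvexHull

def fit_conic(pts):
    """Algebraic least-squares conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0;
    the coefficients are the smallest right singular vector of the design matrix."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    return np.linalg.svd(D)[2][-1]

def model_based_ellipse(edge_pts):
    """Since the nail is opaque, outlying edges lie only in the ellipse
    interior; taking the convex hull of the edge elements discards them."""
    hull = ConvexHull(edge_pts)
    return fit_conic(edge_pts[hull.vertices])
```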

Figure 4.6: Nail hole detection: (a) distortion-corrected image; (b) result of edge detection overlaid on the previous image; (c) nail's longitudinal contour and detected ellipses; (d) zoom-in on the detected nail holes; the arrow indicates occlusions due to the drill guide spheres.

4.5 Drill guide and nail pose estimation

The desired robot alignment is:

$$T^{tip}_{base} = T^{goal}_{base}\,\left(T^{pattern}_{tip}\right)^{-1}$$

where

$$T^{goal}_{base} = T^{tip}_{base}\, T^{pattern}_{tip}\, \left(T^{pattern}_{camera}\right)^{-1} T^{nail}_{camera}\, T^{goal}_{nail}$$

and we have three unknown transformations, $T^{pattern}_{camera}$, $T^{nail}_{camera}$, and $T^{goal}_{nail}$, whose computation is described in the following subsections.

In our context, we perform point-based pose estimation, a problem which has been thoroughly studied in computer vision. We explored several solutions to this problem with the following factors in mind, ordered according to their importance: 1) achieve the highest accuracy possible; 2) deal with noisy input data without any outliers; 3) compute the solution within several seconds. We evaluated four algorithms: Direct Linear Transform (DLT) [9], Depth-Based (DB) [2], Genetic Algorithm (GA), and Non-Linear optimization (NL). Empirically, we found that the classic photogrammetric approach of non-linear optimization initialized with the result of the DLT gave the best results in terms of both time and accuracy. The algorithms are described in Section 4.8, and a detailed comparison is provided later in this chapter.

4.5.1 Drill guide pose estimation

The drill guide pose is computed by non-linear minimization of the projection distances between the known fiducial projection coordinates $(x_i, y_i)$ and the expected ones $(\hat{x}_i, \hat{y}_i)$:

$$v^{*} = \arg\min_{v}\; \frac{1}{2} \sum_{i=1}^{n} \left[ \left(x_i - \hat{x}_i(v)\right)^2 + \left(y_i - \hat{y}_i(v)\right)^2 \right]$$

where $v$ is the rigid transformation parameterization. The non-linear minimization is performed with the Levenberg-Marquardt method, as described in [35].
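A minimal sketch of this minimization using a rotation-vector parameterization of $v$ and SciPy's Levenberg-Marquardt solver (the thesis's exact parameterization may differ; here K is the calibrated intrinsic matrix, X the 3D fiducial coordinates, xy their detected projections, and v0 the DLT initialization):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(v, X, K):
    """Pin-hole projection of 3D points X (n x 3) under pose v = (rotvec, t)."""
    R = Rotation.from_rotvec(v[:3]).as_matrix()
    Xc = X @ R.T + v[3:]           # points in camera coordinates
    p = Xc @ K.T
    return p[:, :2] / p[:, 2:3]    # perspective division

def drill_guide_pose(X, xy, K, v0):
    """Levenberg-Marquardt minimization of the fiducial reprojection
    residuals, initialized with the DLT estimate v0 (6-vector)."""
    residuals = lambda v: (project(v, X, K) - xy).ravel()
    return least_squares(residuals, v0, method='lm').x
```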

4.5.2 Nail pose estimation

We experimented with two approaches for determining the nail pose. The first approach is interactive, requiring the X-ray technician to acquire a few fluoroscopic images until the orientation of the C-arm forms a fronto-parallel setup with the nail holes. The second approach is more generic and requires the acquisition of a single fluoroscopic image. Although the second approach is computationally more attractive, we have empirically found the first approach to be more accurate.

Interactive nail pose determination

In this approach we interactively guide the user to a known pose: a fronto-parallel setup between the C-arm and the nail holes. To arrive at this setup, the X-ray technician images the nail in several orientations, which are scored on two criteria, hole circularity and the deviation angle between the hole's supporting plane and the camera viewing direction. In a fronto-parallel setup the nail holes appear in the image as circles, and the angular deviation between the nail hole's supporting plane normal and the camera viewing direction is zero. We compute the measure of hole circularity as the aspect ratio of the ellipse fitted to the data points, and the angular deviation between the computed supporting plane normal [13, 30] and the camera's viewing direction.¹ These measures serve to guide the X-ray technician to the desired setup. We have empirically found this process to require up to six images.

Once the fronto-parallel setup is achieved, the nail's distance z from the camera focal point is estimated. Using the average diameter $d_i$ of the two nail holes in the image and their known diameter $d_w$, we have

$$z = \frac{f d_w}{d_i}$$

and the nail hole locations are computed accordingly. The nail translation relative to the camera is

$$t = \begin{bmatrix} \frac{z}{f} p_x \\ \frac{z}{f} p_y \\ z \end{bmatrix}$$

where p and q are the centers of the two circles. The nail rotation relative to the camera is

$$R = \begin{bmatrix} p_x - q_x & -(p_y - q_y) & 0 \\ p_y - q_y & p_x - q_x & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

with the first two columns normalized to unit length. The nail's X-axis direction (the choice of p, q) is set so that its angular deviation from the direction of the drill guide's X-axis is minimal.

¹The computation of the supporting plane normal has two solutions, yielding two angles. We display the smaller angle, as we assume that the images are acquired near the fronto-parallel setup.
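The fronto-parallel computation above is closed-form and translates directly into code. A minimal sketch (illustrative names; it assumes f is given in pixels and that p and q are image coordinates with the principal point already subtracted):

```python
import numpy as np

def frontoparallel_nail_pose(p, q, d_i, d_w, f):
    """Nail pose in camera coordinates from a fronto-parallel image.
    p, q: image centers of the two holes (principal point subtracted),
    d_i: average hole diameter in the image (pixels),
    d_w: known hole diameter (world units), f: focal length (pixels)."""
    z = f * d_w / d_i                         # depth from similar triangles
    t = np.array([z / f * p[0], z / f * p[1], z])
    x_axis = np.array([p[0] - q[0], p[1] - q[1], 0.0])
    x_axis /= np.linalg.norm(x_axis)          # nail X axis, in the image plane
    y_axis = np.array([-x_axis[1], x_axis[0], 0.0])
    R = np.column_stack([x_axis, y_axis, [0.0, 0.0, 1.0]])
    return R, t
```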

Generic nail pose determination

To determine the nail pose from a single image acquired from an unknown pose, we find points on the two ellipses which we can identify with their three-dimensional model. Given four or more such points, we solve the point-based pose estimation problem as described in Appendix 4.8.

Given a pair of circles which have undergone a projective transformation, we would like to find a set of points which are invariant to this transformation. In general, the intersection and bitangent points of two circles are invariant under a projective transformation [13, 39]. In our specific case we know that the two ellipses have four bitangent lines and no intersection points. The bitangent lines are computed as described in [39].

Figure 4.7: Bitangents in model and image. The intersection point of bitangents three and four is used in the evaluation of the bitangent identification step.

In homogeneous coordinates the ellipses are:

$$x^T Q_1 x = 0 \quad \text{and} \quad x^T Q_2 x = 0$$

The normal to $Q_1$ at a point $x_1$ is $n_1 = 2 Q_1 x_1$, and the tangent line at $x_1$ is the set of points $r$ which are orthogonal to $n_1$:

$$n_1^T r = 0 \;\Longrightarrow\; 2 x_1^T Q_1 r = 0$$

Let $l_1^T = x_1^T Q_1$; then $l_1^T r = 0$ is a line. Using the duality of points and lines, we can view the line $l_1$ as a point on the dual conic $Q_1^{-1}$:

$$l_1^T Q_1^{-1} l_1 = x_1^T Q_1 Q_1^{-1} Q_1 x_1 = 0 \tag{4.1}$$

Likewise, for the second ellipse we have:

$$l_2^T Q_2^{-1} l_2 = 0 \tag{4.2}$$

A bitangent $l$ of $Q_1$ and $Q_2$ satisfies both Equation 4.1 and Equation 4.2. Hence $l$ is given by the intersection of the dual conics $Q_1^{-1}$ and $Q_2^{-1}$: the four solutions of the two polynomial equations defined by the ellipses.

The order of the tangency points around the ellipse does not change, since we are dealing with a perspective transformation. This means there are four possible matches due to each of the bitangent lines. The matches are then ranked using homographies. The bitangents are enumerated in counter-clockwise order according to the bitangent points on the first circle (Figure 4.7). As the nail is modeled as a planar object with two circular holes, every prospective match induces a homography.
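For illustration, the four bitangents can be recovered by intersecting the two dual conics numerically. A minimal sketch using SymPy to solve the pair of quadratics in the line coordinates (lines are normalized as l = (a, b, 1)^T, which assumes no bitangent passes through the image origin; a production implementation would follow [39]):

```python
import numpy as np
import sympy as sp

def bitangents(Q1, Q2):
    """Common tangent lines of two conics Q1, Q2 (3x3 symmetric matrices):
    solve l^T Q1^{-1} l = 0 and l^T Q2^{-1} l = 0 (Equations 4.1 and 4.2)."""
    a, b = sp.symbols('a b')
    l = sp.Matrix([a, b, 1])
    eqs = [(l.T * sp.Matrix(np.linalg.inv(Q)) * l)[0] for Q in (Q1, Q2)]
    lines = []
    for sa, sb in sp.solve(eqs, [a, b]):
        ca, cb = complex(sa), complex(sb)
        if abs(ca.imag) < 1e-9 and abs(cb.imag) < 1e-9:  # keep real solutions
            lines.append(np.array([ca.real, cb.real, 1.0]))
    return lines  # four bitangents for two non-intersecting ellipses
```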
