2D-3D Rigid-Body Registration of X-Ray Fluoroscopy and CT Images by Lilla Zöllei


2D-3D Rigid-Body Registration of X-Ray Fluoroscopy and CT Images

by

Lilla Zöllei

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Masters in Electrical Engineering and Computer Science at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY

August 2001

© Lilla Zöllei, MMI. All rights reserved.

The author hereby grants to MIT permission to reproduce and distribute publicly paper and electronic copies of this thesis document in whole or in part.

Author: Department of Electrical Engineering and Computer Science, August 10, 2001

Certified by: W. Eric L. Grimson, Bernard Gordon Chair of Medical Engineering and Professor of Computer Science and Engineering, MIT AI Lab, Thesis Supervisor

Certified by: William M. Wells III, Research Scientist, MIT AI Lab; Assistant Professor of Radiology, Harvard Medical School and Brigham and Women's Hospital, Thesis Supervisor

Accepted by: Arthur C. Smith, Chairman, Departmental Committee on Graduate Students


2D-3D Rigid-Body Registration of X-Ray Fluoroscopy and CT Images
by Lilla Zöllei

Submitted to the Department of Electrical Engineering and Computer Science on August 10, 2001, in partial fulfillment of the requirements for the degree of Masters in Electrical Engineering and Computer Science

Abstract

The registration of pre-operative volumetric datasets to intra-operative two-dimensional images provides an improved way of verifying patient position and medical instrument location. In applications from orthopedics to neurosurgery, it has great value in maintaining up-to-date information about changes due to intervention. We propose a mutual information-based registration algorithm to establish the proper alignment. For optimization purposes, we compare the performance of the non-gradient Powell method and two slightly different versions of a stochastic gradient ascent strategy: one using a sparsely sampled histogramming approach and the other Parzen windowing to carry out probability density approximation. Our main contribution lies in adapting the stochastic approximation scheme successfully applied in 3D-3D registration problems to the 2D-3D scenario, which obviates the need for the generation of full DRRs at each iteration of pose optimization. This facilitates considerable savings in computational expense. We also introduce a new probability density estimator for image intensities via sparse histogramming, derive gradient estimates for the density measures required by the maximization procedure and introduce the framework for a multiresolution strategy to the problem. Registration results are presented on fluoroscopy and CT datasets of a plastic pelvis and a real skull, and on high-resolution CT-derived simulated datasets of a real skull, a plastic skull, a plastic pelvis and a plastic lumbar spine segment.

Thesis Supervisor: W. Eric L. Grimson
Title: Bernard Gordon Chair of Medical Engineering and Professor of Computer Science and Engineering, MIT AI Lab

Thesis Supervisor: William M. Wells III
Title: Research Scientist, MIT AI Lab; Assistant Professor of Radiology, Harvard Medical School and Brigham and Women's Hospital


Acknowledgments

First and foremost, I would like to say thank you to my thesis supervisors, Prof. Eric Grimson and Prof. Sandy Wells. Both of them greatly supported me in achieving my goals throughout these two years and were there to talk to me whenever I had questions or doubts. Prof. Grimson, thank you for your knowledgeable advice regarding research issues, class work and summer employment. Sandy, thank you for being so patient with me and being open for a discussion almost any time. I learned a lot while working with you!

My special thanks go to my third (and unofficial) thesis supervisor, Eric Cosman, the author of my precious Thesis Prep Talk. I really appreciated all of our valuable conversations throughout the past year, and thanks for keeping me inspired even through a nice and sunny summer. Notice, I managed not to forget how much I prefer neon to sunlight!

I sincerely appreciate all the help that I got from our collaborators at the SPL, the Brigham and the ERC group. In particular, I would like to mention the people who helped me obtain the majority of my 2D and 3D acquisitions: Ron Kikinis, Dr. Alexander Norbash, Peter Ratiu, Russ Taylor, Tina Kapur and Branislav Jaramaz.

Thank you to all the people in the AI Lab for all your valuable suggestions and advice, and special THANKS to those who took time to read through my paper and/or thesis drafts: Lauren, Raquel, Kinh and Dave. Lily, thanks for the Canny edge-detection code! Tina, I would also like to express how greatly I appreciate your never-ending enthusiasm for research and the trust that you invested in me since the first day I got to MIT. I have truly enjoyed collaborating with you!

And last, but definitely not least, I would like to express my appreciation for the constant encouragement that came from my parents and my brother, even if the former have been thousands of miles away... Anyu, Apu és Pisti! I am endlessly grateful for the selfless trust and the tireless encouragement that you have given me day after day!

This work was supported by the Whiteman Fellowship and the NSF ERC grant (JHU Agreement # ).

Contents

1 Introduction
   1.1 2D-3D Registration
   1.2 Medical Applications
      1.2.1 2D Roadmapping
      1.2.2 Orthopedics
   1.3 Problem Statement
   1.4 Thesis Outline

2 Background and Technical Issues
   2.1 2D-3D Rigid-Body Registration
      2.1.1 Medical Image Modalities
      2.1.2 Digitally Reconstructed Radiographs
      2.1.3 Similarity Measures
      2.1.4 Optimization
      2.1.5 Number of Views
      2.1.6 Transformation Representation
      2.1.7 Transformation Parameterization
      2.1.8 Other Notations
   2.2 Outline of Our Registration Approach
   2.3 Summary

3 The Registration Algorithm
   3.1 The Transformation Parameter
   3.2 The Objective Function
      3.2.1 Definition of MI
      3.2.2 MI in the Registration Problem
   3.3 Probability Density Estimation
      3.3.1 Parzen Windowing
      3.3.2 Histogramming
   3.4 The Optimization Procedures
      3.4.1 Powell's Method
      3.4.2 Gradient Ascent Strategy
   3.5 Defining the Update Terms
      3.5.1 Gradient-based Update Calculations
      3.5.2 Partial Derivatives of Density Estimators
      3.5.3 Partial Derivatives of Volume Intensities wrt T
   3.6 Summary

4 Experimental Results
   4.1 Probing Experiments
   4.2 Summary of the Registration Algorithm
      4.2.1 Step 1: Preprocessing
      4.2.2 Step 2: Initialization
      4.2.3 Step 3: Optimization Loop
   4.3 Registration Results
      4.3.1 Registration Error Evaluation
      4.3.2 Objective Function Evaluation
   4.4 CT-DRR Experiments
      4.4.1 CT-DRR Registration
      4.4.2 Multiresolution Approach
      4.4.3 Robustness, Size of Attraction Basin
      4.4.4 Accuracy Testing
      4.4.5 Convergence Pattern
      4.4.6 Registration Parameter Settings
   4.5 CT-Fluoroscopy Experiments
      4.5.1 Experiments with X-Ray Images of Gage's Skull
      4.5.2 Experiments with Fluoroscopy of the Phantom Pelvis
   4.6 Summary

5 Concluding Remarks
   5.1 Summary
      5.1.1 Controlled Experiments
      5.1.2 CT - X-ray Registration
   5.2 Future Research Questions and Ideas
      5.2.1 Coupling Segmentation and Registration
      5.2.2 View and Number of Fluoroscopic Acquisitions
      5.2.3 Defining Automatic Stopping Criterion for Gradient Optimization Protocols
      5.2.4 Truncation/Limited Field of View
      5.2.5 Distortion Effects & Dewarping
      5.2.6 Histogram Characteristics
      5.2.7 Code Optimization
      5.2.8 Improving MI

Appendix
   Small Angle Rotation
   The Story of Phineas Gage


List of Figures

2-1 Lateral and AP acquisitions of X-ray fluoroscopic images of the pelvis phantom.
2-2 Orthogonal slices of a head CT acquisition: axial, sagittal and coronal views.
2-3 CT-derived DRR images produced by the ray-casting algorithm.
2-4 CT-derived DRR images produced by the voxel-projection algorithm.
2-5 The transformation parameter T which relates the coordinate frames of the imaging environment and the data volume; T = D_c R D_d.
4-1 Results of two probing experiments evaluating (a) mutual information and (b) pattern intensity on the skull dataset. Displacement range of +/- 20 (mm) and rotational range of +/- 45 (deg) were specified.
4-2 Results of two probing experiments evaluating a cost function on (a) the original and (b) the downsampled and smoothed version of the same phantom pelvis dataset. Displacement range of +/- 20 (mm) and rotational range of +/- 45 (deg) were specified.
4-3 Single-view simulated fluoroscopic images from the controlled experiments.
4-4 Registration results of a phantom pelvis controlled experiment with the Reg-Pow method: contours of registration results are overlaid on the observed DRR images.
4-5 Sample output from a controlled set of Reg-Hi experiments. Dataset: plastic pelvis. Initial offsets: (a) y = 20 (mm) and (b) β = 15 (deg). Plots display the magnitude of displacement error, rotation angle and the MI estimate at each iteration.
4-6 Real X-ray fluoroscopy of the phantom pelvis and real X-ray images of Phineas Gage's skull.
4-7 Error distribution based upon the results of 30 experiments with random initial offset on a given interval. Row 1 displays plots with respect to error terms d_e and r_e while row 2 demonstrates errors in D_d and R.
4-8 Error distribution based upon the results of 30 experiments with random initial offset on a given interval. Row 1 displays plots with respect to error terms d_e and r_e while row 2 demonstrates errors in D_d and R.
4-9 Registration results of an experiment on real X-ray and CT of the Gage skull dataset using the Reg-Pz method.
4-10 Registration results of an experiment on real X-ray and CT of the Gage skull dataset using the Reg-Pz method. Contours of the DRR images created by the output of the registration algorithm are overlaid on the original fluoro images.
4-11 Registration results of an experiment on real X-ray and CT of the Gage skull dataset using the Reg-Pow method. Contours of the DRR images created by the output of the registration algorithm are overlaid on the original fluoro images.
4-12 Registration results of an experiment on real X-ray fluoroscopy and CT of the phantom pelvis dataset using the Reg-Pow method. Contours of the DRR images created by the output of the registration algorithm are overlaid on the original fluoro images.
4-13 Registration results of an experiment on real X-ray fluoroscopy and CT of the phantom pelvis dataset using the Reg-Hi method. Contours of the DRR images created by the output of the registration algorithm are overlaid on the original fluoro images.

List of Tables

4.1 CT dataset specifications; sm1: smoothed volume on hierarchy level 2; sm2: smoothed volume on hierarchy level 3; sm3: smoothed volume on hierarchy level 4.
4.2 Computing resources: machine specifications.
4.3 Timing measurements to contrast registration running time on different hierarchical levels.
4.4 Controlled registration accuracy tests using the Reg-Hi method; no hierarchy.
4.5 Registration results of methods Reg-Pz, Reg-Hi and Reg-Pow on controlled experiments of a phantom pelvis and a real skull.
4.6 Error measurements for the X-ray fluoroscopy and CT registration experiments on the Gage skull dataset.


Chapter 1
Introduction

1.1 2D-3D Registration

Recently, there has been a growing number of medical experts who advocate a minimally invasive approach to surgery. Their aim is to reduce the physical stress applied to the human body due to medical treatment and procedures and also to reduce treatment costs, for example, by minimizing the size and number of incisions. Unfortunately, in comparison to open procedures, these approaches restrict the surgeon's view of the anatomy. This leads to an increasing need for advanced imaging techniques that would help not only with diagnosis, but also with planning and guiding interventions.

Pre-operative images provide an excellent source of detail about the anatomy in question. The widely used three-dimensional image modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) contain high resolution information about the imaged body part. Other imaging techniques such as Positron Emission Tomography (PET) and Functional MRI (fMRI) complement that knowledge with metabolic and functional information. All these datasets can greatly assist in establishing diagnosis and planning procedures pre-operatively or evaluating an intervention post-operatively. The same set of images can be conveniently utilized in surgery as well. However, they have the drawback that they may not completely reflect the surgical situation, since they are static. In some applications it is important to use intra-operative images to follow the changes caused by the procedure or to visualize the location of a tool.

In the operating room or interventional suite, it is mostly 2D images that are available to record details about the current anatomical state. X-ray, X-ray fluoroscopy and portal images are all good examples of image modalities used for this purpose. Two-dimensional acquisitions are often taken instead of volumetric datasets because of timing, radiation-related and technological arguments. First, acquiring several 3D volumetric images during a procedure takes too long to be practical compared to 2D imaging. Second, the radiation dose to both the patient and the doctor is reduced if only image slices are recorded rather than all the projections needed to reconstruct a 3D volume. Third, by using only 2D images, it is sufficient to have simpler imaging equipment in the operating suites.

Unfortunately, 2D images lack significant information that is present in the 3D modalities. Hence, in order to relate the changes recorded by the 2D modalities to the detailed 3D model, medical experts need to fuse the information from the pre-operative and intra-operative images mentally, which can be a challenging task. Therefore, it is useful to find a way to both automate that procedure and to make it reliable.

The combination of pre-operative and intra-operative images conveys the most information if the components are properly aligned in space. To achieve this it is necessary to determine their relative position and orientation. The procedure that identifies a geometrical transformation that aligns two datasets, or in other words locates one of them in the coordinate system of the other, is called registration. There already exist several techniques that can perform this task either semi- or fully automatically. Matching, for example, different types of MRI with each other or with CT datasets is routinely done at numerous medical institutions. Most of these applications operate on images of the same dimensionality, aligning inputs in either 2D or 3D. Could we, nevertheless, align images with different dimensionality and complement the information from high-resolution pre-operative datasets with the more up-to-date, intra-procedural images? To achieve this goal, not only would we have to account for the different representations of a particular anatomical structure in the multimodal inputs, but we would also need to process information represented in different spaces. Additionally, as the registration results are expected during the medical procedure, the computation time would also be constrained. In a nutshell, these are the main challenges that one needs to address when solving the 2D-3D registration task. In our work, we present a solution to these problems and discuss the performance behavior of our registration framework.

1.2 Medical Applications

In this section, we give some specific examples of medical applications that could benefit from a reliable (and efficient) solution to the 2D-3D registration problem. They belong to the field of Image Guided Surgery (IGS). Their main objective is to introduce highly accurate pre-operative information about the examined anatomy into the operating room (where normally only lower dimensional images can be acquired) and to help the execution of interventions carefully planned prior to the procedure by fusing the more detailed pre-operative with the more current intra-operative images.

1.2.1 2D Roadmapping

There exist a number of serious illnesses which can be treated by the use of catheters that are maneuvered into the blood vessels of the brain. These include aneurysms and arteriovenous malformations. Traditionally, X-ray fluoroscopy has been widely used in these cranio-catheter procedures. There is a currently existing procedure called 2D roadmapping in which doctors follow the path of a catheter in the patient's body with the help of dynamic intra-operative 2D imaging.

The procedure takes place in a special fluoroscopy suite. Prior to the intervention, opaque contrast material is injected into the patient, and a 2D acquisition is obtained. The resulting image shows vessels with high contrast because of the injected contrast agents. This type of data is used pre-operatively for diagnosis and planning, and it is also often acquired at the beginning of a procedure to serve as a reference set during the procedure. When the contrast agent is no longer present in the blood, dynamic fluoro images are acquired to follow the changes due to the intervention and to record the most current state of the treated body part. These are then subtracted from the pre-operative static image. As a result, the vessels (of high contrast in the pre-operative data) and the catheter (not present at all in the pre-operative data) are the only structures highlighted. Continuing this process allows the physician to obtain information about the actual location and the movement of the catheter.

The main disadvantage of this method lies in having only a static 2D reference image highlighting the vessels. It is not rare that cranio-catheter procedures take more than 5 hours. During such a long time it is difficult to prevent any patient movement. Misalignment between the pre-operative image and the intra-procedural ones is inevitable. When that happens, re-injection of the contrast agent is necessary for obtaining another static reference image, and the intervention is halted.

In the future, the drawbacks of the 2D roadmapping method might be overcome by using a 3D dataset as the reference from which synthetic 2D images can be generated as needed. Prior to the surgery, when the initial dose of contrast agent is injected, this requires a 3D volume rather than 2D images to be taken. During the procedure, when the dynamic fluoro images are obtained, they are compared to simulated projection images created from the 3D dataset. In this way, if the patient moves, it is only the parameters that describe the patient position and orientation in the imaging model that have to be modified in order to have the simulated and intra-procedural images line up again. These parameters are the ones that a 2D-3D registration algorithm would compute.

(Footnote: This project has been jointly proposed by Alexander M. Norbash, MD (Department of Radiology, Brigham and Women's Hospital) and Prof. William Wells (Artificial Intelligence Laboratory, MIT).)

1.2.2 Orthopedics

Metastatic Bone Cancer

Another application is related to an orthopedics procedure, the treatment of metastatic cancer in the bones. The task here is to remove localized lesions from particular locations of the bones. Again, the treatment plan can be thoroughly designed prior to the operation using 3D CT volumes with high information content, but during the intervention, guidance and verification are only practical by making use of intra-operative images. Utilizing both data sources requires the alignment of the intra-operative and pre-operative datasets.

Total Hip Replacement

Hip joint replacement surgery has several uses for 2D-3D registration. One is implanting an acetabular cup into the pelvic bone during total hip replacement procedures. In order to verify the correct position and orientation of the metal cup before the operation terminates, 2D images are acquired. These need to be related to the 3D model of the anatomy. Another use concerns cases in revision surgery. Such a procedure is necessary if, following a total hip replacement procedure, the acetabular cup becomes mislocated or detached from the pelvis. These orthopedic applications are currently pursued by the HipNav project at CMU and researchers at Johns Hopkins University.

Spine Procedures

Spine procedures are another very large application area for IGS, since back problems are very common, and the potential complications of damage to the spinal cord are devastating. Planning may effectively use pre-operative CT, while the interventions may be most practically guided by the use of C-arm X-ray equipment. One example procedure is vertebroplasty, which is the reinforcement of a failing vertebra by the placement of cement. Other applications include the placement of pedicle screws as components of stabilization hardware.

1.3 Problem Statement

The goal of the project described in this document is to register pre-operative volumetric data to intra-procedural 2D images. We are particularly interested in examining the problem of aligning 3D CT volumes to corresponding X-ray fluoroscopy. As a single 2D image, in practice, does not convey sufficient information about the spatial location of the imaged object, we require two projection images to achieve our task. We assume that the two imaging views are related by a known transformation, hence it is necessary to recover the required transformation with respect to only one of them. (This is a realistic assumption, as biplanar images are often taken by rotating the imaging source by a pre-specified angle around one of the imaging axes. Also, biplanar acquisitions are considered to be standard in cranio-catheter applications.)

In solving the proposed problem, our main challenges lie in identifying a similarity measure, or objective function, that can quantify the quality of the alignment between the images and defining a procedure to modify and refine current estimates of the problem parameters in a way that the similarity score is optimized. An additional primary focus of this effort is finding 2D-3D alignment methods whose computational complexity is compatible with the time constraints implied by the interventional applications.

Experimentally, we aim to demonstrate the performance characteristics of our registration algorithm on a wide variety of datasets. The collection includes fluoroscopy and CT datasets of a plastic pelvis and a real skull and also high-resolution CT-derived datasets of a real and a plastic skull, a plastic pelvis and a plastic lumbar spine segment.

1.4 Thesis Outline

In Chapter 2, we introduce the problem of 2D-3D registration in a more thorough manner. We present the technical difficulties involved in the analysis and comparison of the multimodal and multidimensional datasets. We then summarize a handful of approaches that have already presented promising results in this area. We also introduce some frequently-used medical image modalities, describe some objective functions and some fast methods that simulate X-ray generation, which is a subtask of some registration methods.

In Chapter 3, we focus on the computational details of our own approach. We describe the particular choices made when designing the components of our algorithm, we demonstrate the data structures used to encode the transformation variables and provide an in-depth derivation of the most important formulas used in the implementation.

In Chapter 4, registration experiments are described using both synthetic and real datasets, as well as a detailed analysis of their results.

The thesis concludes with Chapter 5, which summarizes the project and our contributions. Finally, we describe some related future research ideas that we would like to investigate.

In the Appendix, the reader may find a precise derivation of a particular mathematical formula and also a summary of the fascinating case of Phineas Gage, whose skull was used in our experiments.


Chapter 2
Background and Technical Issues

Introduction

In this chapter, we give a general introduction to the 2D-3D rigid-body registration problem applied specifically to medical modalities. We present a concise summary of research studies that have been applied to the problem while outlining a highly selective set of objective functions, optimization procedures and medical image modalities that are most frequently used in medical image processing. We also describe a fast technique that produces simulated projection images, called digitally reconstructed radiographs, as this technique was crucial in speeding up and monitoring our registration procedure. Then we introduce a new approach that we used to address the 2D-3D registration task.

2.1 2D-3D Rigid-Body Registration

Registering pre-operative datasets to images acquired intra-operatively can provide up-to-date information at the treatment site, guiding surgery or other interventions. When using different image modalities, information invisible in one of them can be incorporated into the other. Three-dimensional intra-procedural image acquisition is uncommon; typically only two-dimensional datasets can be obtained for such purposes. Although these images lack the spatial detail of volumetric data, they have the advantages of faster acquisition time and a potentially reduced amount of radiation exposure to both patients and doctors. Ideally, one can recover the advantages of the volumetric data by aligning the intra-operative 2D images with pre-operative volumes. However, not only do we have to focus on solving the multi-dimensional registration problem, but the algorithm running time should also be kept reasonable. If the alignment results cannot be produced well within the time-limits of an intervention, the algorithm cannot be used.

The majority of the medical applications for the proposed kind of registration task have emerged in the field of radiology. Alignment information is crucial in planning, guidance and treatment procedures. More specifically, the medical community has expressed interest in applying the 2D-3D alignment results in the following application areas: placement of pedicle screws in spine surgery [5, 6], aortic endoprostheses in transfemoral endovascular aneurysm management [7], verifying patient setup accuracy for radiotherapy and acetabular implant position in case of total hip replacement [1, 2, 11], displaying surgical instruments in the pre-operative CT volume [5], projecting important anatomical structures visible in CT onto 2D acquisitions and confirmation of depth electroencephalogram electrode position [33].

Our collaborators, in particular, are interested in applying 2D-3D registration in the fields of orthopedics and neuroradiology. Two of the major projects of interest are head catheter tracking in cranio-catheter procedures and monitoring acetabular cup insertion during total hip replacement surgery. (A more detailed description of these and other procedures can be found in Chapter 1.) Therefore, the experimental datasets that we have acquired are mostly images of the skull and the pelvis.

(Footnote: Our collaborators are Alexander M. Norbash, MD (Department of Radiology, Brigham and Women's Hospital) and the Engineering Research Center (ERC) group, including collaborators from CMU, Johns Hopkins University and MIT.)

2.1.1 Medical Image Modalities

The most commonly used 2D medical image modalities for the 2D-3D alignment task have been portal images and X-ray fluoroscopy (fluoro). Portal images are used in radiation treatment procedures. Their creation employs high-energy treatment radiation beams instead of low-energy imaging radiation, hence they could be considered byproducts of a procedure, and their quality is extremely poor: they are of low resolution and they have low contrast. Research studies involving this modality use various segmentation techniques prior to or simultaneously with the registration procedure [1, 2, 30] in order to identify key structures in the portal images. Otherwise the individual intensity values have not been found to be sufficiently informative to describe the imaged anatomy.

Fluoroscopic images, on the other hand, reveal much more detail about the examined anatomy. They are taken by X-ray machines and are created by short wavelength energy. Fluoro images best visualize bony structures of the anatomy (Figure 2-1), as it is the bony tissues that absorb the most radiation in the human body. The major disadvantage of this modality stems from the fact that, without correction, its geometric accuracy degrades due to pincushion and radial distortion effects in current equipment. (Distortions of this sort are not a problem with the newer generation of solid-state detectors.)

Figure 2-1: Lateral and AP acquisitions of X-ray fluoroscopic images of the pelvis phantom.

Among the 3D image modalities, Computed Tomography (CT) has been most widely considered for the registration task. CT images are created by assimilating multiple X-ray acquisitions. The X-ray machine rotates around the patient's body and at pre-specified angles shoots X-ray beams through the imaged object. The imaging plate records the absorption rate due to the intervening tissues, and the reconstructed images represent these absorption values, which are referred to as Hounsfield numbers. The tomographic data acquisition is conventionally modeled by the Radon Transform and reconstructed according to the Filtered Backprojection algorithm. Distortion problems are usually not of major concern in case of this modality. Figure 2-2 shows three orthogonal slices of a real head CT acquisition.

Figure 2-2: Orthogonal slices of a head CT acquisition: axial, sagittal and coronal views.

2.1.2 Digitally Reconstructed Radiographs

In our application, we focus on fusing CT and X-ray fluoroscopy images. One of the key challenges when attacking the 2D-3D registration problem is the need for an appropriate way to compare input images that are of different dimensionalities. The most common approach is to simulate one of the modalities given the other dataset and an estimate of their relative spatial relationship, so that the images can be compared in the same space. Then the transformation estimate can be updated to maximize an alignment score according to some similarity measure. Reconstructing the 3D volume from 2D images is one alternative, but it requires numerous projection acquisitions and large computation time. It is more feasible to simulate 2D images from the 3D volume. Most existing applications follow this approach.

Ray-Casting

Simulated projection images that model the production of X-ray acquisitions from volumetric CT are called Digitally Reconstructed Radiographs (DRRs). These images are traditionally formed by implementing the so-called ray-casting algorithm, which we briefly summarize. Rays are first constructed between points of the imaging plane and the imaging source. Then the individual intensity values of the DRR images are computed by summing up the attenuation coefficients associated with each volume element (voxel) along a particular ray. An example of a DRR image created according to this algorithm is shown in Fig. 2-3.

Figure 2-3: CT-derived DRR images produced by the ray-casting algorithm.
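To make the per-ray summation concrete, the following is a minimal Python sketch of ray-casting DRR formation. It assumes an idealized geometry (a point source, an axis-aligned CT grid with unit voxel spacing, nearest-neighbor sampling along each ray) and hypothetical array names; it is meant only to illustrate the computation described above and is not the implementation used in this thesis.

```python
import numpy as np

def ray_cast_drr(ct, source, detector_origin, detector_u, detector_v,
                 n_rows, n_cols, n_samples=256):
    """Minimal ray-casting DRR: sum CT attenuation values along each
    source-to-pixel ray using nearest-neighbor sampling (illustrative only)."""
    drr = np.zeros((n_rows, n_cols))
    # Note: the nested loops deliberately mirror the per-ray description
    # above; visiting every sample of every ray is exactly what makes the
    # classical algorithm slow.
    for r in range(n_rows):
        for c in range(n_cols):
            # 3D position of detector pixel (r, c)
            pixel = detector_origin + r * detector_v + c * detector_u
            # Sample points uniformly between the source and the pixel
            ts = np.linspace(0.0, 1.0, n_samples)
            points = source[None, :] + ts[:, None] * (pixel - source)[None, :]
            idx = np.round(points).astype(int)
            # Keep only samples that fall inside the CT volume
            valid = np.all((idx >= 0) & (idx < np.array(ct.shape)), axis=1)
            idx = idx[valid]
            # DRR intensity = sum of attenuation coefficients along the ray
            drr[r, c] = ct[idx[:, 0], idx[:, 1], idx[:, 2]].sum()
    return drr

if __name__ == "__main__":
    ct = np.random.rand(64, 64, 64)            # stand-in for a CT volume
    drr = ray_cast_drr(ct,
                       source=np.array([32.0, 32.0, -200.0]),
                       detector_origin=np.array([0.0, 0.0, 100.0]),
                       detector_u=np.array([1.0, 0.0, 0.0]),
                       detector_v=np.array([0.0, 1.0, 0.0]),
                       n_rows=64, n_cols=64)
    print(drr.shape, drr.max())
```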

Although producing high-quality results, this procedure can be quite inefficient for our purposes. As it must visit every voxel while computing the projection image, it tends to be extremely time-consuming. The creation of just one projection slice can take up to 100 seconds on a fast 1000 MHz machine. If we want to introduce a registration algorithm for interventional use, a task which might require the creation of hundreds of DRRs as intermediate steps, we need to find alternative methods to approximate the 2D projections.

The speed limitations of the ray-casting algorithm are partly due to the size of the volumetric datasets. The majority of the CT volumes that we analyzed had dimensions of 512 x 512 x 200 voxels. (See a more detailed summary of the specifications of our datasets in Table 4.1 of Chapter 4.) But the other part of the problem stems from the fact that, if we closely follow the ray-casting algorithm, the data voxels are not accessed in an optimal way. As DRR-creation is a significant component of the registration application, several research studies have concentrated on defining more practical methods for their computation.

One way to address the problem of handling large volumes is to somehow restrict the size of the 3D datasets to be analyzed. In [3], the authors introduce a focused registration technique. The region of interest in the CT acquisition is segmented out prior to the intervention (e.g., the image of a vertebra) and the alignment algorithm is applied only with respect to that sub-entity. The same issue may also be effectively addressed by the application of a multiresolution approach, where it is a downsampled and smoothed version of the input images that are first aligned [18, 15, 16]. (The hierarchical approach not only decreases the computational time, but also increases the robustness of the algorithm. A more detailed description of the hierarchical approach can be found in Chapter 4, where we present our experimental results.)

Voxel-Projection

To approach the problem from an algorithmic development point of view, it is useful to invent new approximation methods for constructing the DRRs. One such procedure, which we used in our registration experiments, is called voxel-projection [14]. The main idea behind this new method is the attempt to maximally optimize memory accesses while processing the input datasets. Instead of carrying out the calculations following the layout of the DRR intensities to be determined in memory (and traversing the CT volume in a random manner), it accesses the volume elements in the order in which they are stored. First the algorithm estimates how much influence an individual volume element contributes to elements of the DRR image. Then, after projecting the voxel centers onto the imaging plane, a smoothing function assures that the resulting image is not corrupted by banded intensities. That could happen due to the lack of interpolation and due to ignoring the impact of a voxel on neighboring pixels. In our application, we achieved some improvement in the quality of the DRR images by increasing the minimal size of the smoothing kernel originally determined in [14].

To compare the image quality of radiographs produced by the ray-casting method and the voxel-projection technique, compare Fig. 2-3 and Fig. 2-4, which display DRR images derived from the same CT volume with the two different algorithms. Careful examination of Fig. 2-3 and Fig. 2-4 reveals that the two DRR-production algorithms result in images that are very similar. The main criticism against the outputs of the fast, voxel-projection technique could be that its images are not as smooth as those of the traditional procedure. Some intensity banding is visible on the more uniformly colored regions of its images. The voxel-projection strategy has led to a speedup of a factor of 6, especially when relatively lower resolution projection images are sufficient.

Figure 2-4: CT-derived DRR images produced by the voxel-projection algorithm.
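The core idea of voxel-projection, traversing the CT volume in storage order, projecting each voxel center onto the image and splatting its contribution with a small smoothing kernel, can be sketched as follows. The sketch simplifies the geometry to a parallel projection along one volume axis and uses an arbitrary 3x3 kernel; it illustrates the memory-access pattern only and is neither the algorithm of [14] nor the code used in our experiments.

```python
import numpy as np

def voxel_projection_drr(ct, kernel=None):
    """Minimal voxel-projection DRR under an idealized parallel projection
    along the z axis: visit voxels in storage order, project each voxel
    center onto the image and splat its value with a small smoothing kernel."""
    if kernel is None:
        # Small kernel that spreads a voxel's contribution over neighboring
        # pixels, reducing the banding that point-wise projection would cause.
        kernel = np.array([[0.05, 0.10, 0.05],
                           [0.10, 0.40, 0.10],
                           [0.05, 0.10, 0.05]])
    nx, ny, _ = ct.shape
    drr = np.zeros((nx + 2, ny + 2))           # 1-pixel border for the splat
    for i in range(nx):                        # storage-order traversal
        for j in range(ny):
            # Under parallel projection every voxel in column (i, j) maps to
            # image pixel (i, j); accumulate the column and splat it.
            drr[i:i + 3, j:j + 3] += ct[i, j, :].sum() * kernel
    return drr[1:-1, 1:-1]

if __name__ == "__main__":
    ct = np.random.rand(64, 64, 64)            # stand-in for a CT volume
    print(voxel_projection_drr(ct).shape)      # (64, 64)
```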

Other DRR Techniques

Other approaches that also improve the computational burden of the ray-casting procedure include shear-warp factorization [32, 8] and the pre-computation of line integrals with the construction of a new data structure called the Transgraph [12]. The main idea behind the latter comes from the field of computer graphics, and is referred to as view-based rendering. It allows for fast computation of the DRR values and easy differentiation of the function generating them. Interpolating the densely sampled pre-computed line integrals proves to be more efficient than implementing the ray-casting technique. However, that strategy imposes a significant pre-computation/pre-processing step.

(Footnote: The name Transgraph is based on the Lumigraph from the field of Computer Graphics.)

2.1.3 Similarity Measures

In many registration systems, the quality of alignment is scored by objective functions. Common registration methods can be grouped into two major categories based upon the nature of the similarity measure that they apply: they can be classified as feature- or intensity-based.

Feature-based Techniques

Feature-based approaches rely on the presence and identification of natural landmarks or fiducial markers in the input datasets in order to determine the best alignment. It is necessary to segment the most significant features in both of the input images, and the matching criterion is then optimized with respect to them. Contour- and point-based techniques [5, 6, 10, 41] are examples of this strategy, as well as registration methods that compare medialness properties of segmented anatomies [30]. Others carry out a minimax entropy strategy [1, 2], executing simultaneous registration and segmentation steps.

Although the reduced number of features to be registered could provide great computational speedup (after the segmentation procedure is completed), the major drawbacks of these methods lie in the need to carefully plan the image acquisition protocols in advance and to potentially re-scan the patient if the diagnostic images do not contain the fiducials, the assumption that most of the fiducial markers can be located in all of the analyzed inputs, the inconvenience of planting artificial markers on the patient and the dependence on the segmentation procedure, which can potentially introduce (additional) errors. These solutions might also require some level of user interaction, which generally is not desirable throughout medical procedures.

Intensity-based Measures

Intensity-based measures operate on the pixel or voxel intensities directly. They calculate various statistics using the raw intensity values of the inputs, which are then compared in the images to be aligned. Though the number of points to be registered is much greater than in the case of the feature-based methods, no feature extraction step is required.

An extensive study of intensity-based similarity measures applied specifically to 2D-3D applications has evaluated the performance of six different objective functions in matching X-ray fluoroscopy and CT images [3]. The imaged organ was a phantom spine, and it was only a user-defined small region of interest (e.g., an individual vertebra) that was registered at a time. The objective functions considered by the authors were: normalized cross-correlation [33], entropy of the difference image [9], pattern intensity [6], mutual information [20, 15], gradient correlation [34, 33] and gradient difference [3]. After a careful registration study (using fiducial markers to ensure accuracy), the authors ranked these measures based upon their accuracy and robustness. They found that the best objective functions for the examined multimodal registration task are pattern intensity and gradient difference. These measures proved to be the most robust with respect to the (simulated) presence of soft tissue and of a surgical instrument appearing only in one of the modalities. Both of these objective functions were implemented to use the whole input image in order to evaluate the current quality of alignment.

The information theoretic measure of mutual information (MI) performed poorly in these experiments. It did not handle partial occlusions and truncations well, and its performance further deteriorated when soft tissue was present. The study found two possible explanations for the failures of this similarity measure, which has at the same time been very successful in the 3D-3D domain. First, MI is stated to require a large set of samples to obtain a good probability density estimate for the underlying entropy calculations. Although that is given in 3D-3D registration problems, for the 2D-3D application that was not true. We say more about this aspect of their results later, in Chapter 3. Second, the authors claimed that as the search space of MI is much larger than what the problem requires, it is more difficult to recover the required parameters in it. (MI does not make the assumption that the two compared modalities are related via a linear function; it assumes a broader statistical relationship between the analyzed variables.)

Other intensity-based measures that have also been introduced for solving the CT-DRR registration task are the absolute correlation coefficient [34], cross correlation and the magnitude of the scalar product of gradients [33] and a second order estimation of mutual information that aims to incorporate spatial information into the MI measure [31]. The pattern intensity measure was also successfully applied in an MR-derived DRR and CT registration problem [14].

2.1.4 Optimization

Provided we have a suitable similarity function, the best alignment parameters can be located with the help of an optimization procedure. Such a protocol is responsible for modifying the current parameter estimates in a way that the similarity function eventually takes on its (local) extremum. In this work, we assume that the similarity measure is a reward and not a cost function. Hence the perfect/ideal alignment is assigned the highest score, and the optimization procedure aims to maximize the objective function.

There are two major types of strategies that perform the maximization task: non-gradient and gradient methods. Non-gradient strategies execute a local search in the parameter space by evaluating the objective function at different locations according to a pattern, while gradient procedures use gradient information to indicate the direction to the desired extremum. The former strategy might be easier to implement, as it requires only the evaluation of the objective function and no additional computations to derive the consecutive search directions. However, the latter could potentially be much faster, as its search is specifically guided towards the extremum. Among the non-gradient methods, we found that the Powell method [34], the downhill simplex strategy [14] and an iterative optimization of individual transformation parameters (often called the coordinate ascent method) [5, 3] are the most popular. Among the gradient-based approaches, it is the Levenberg-Marquardt-type strategies [11, 29] and the hill-climbing (gradient ascent) approach [42, 15] that dominate.

2.1.5 Number of Views

In our experiments, examining only a single 2D image is not sufficient to robustly recover all registration parameters required to properly position the examined anatomy in the 3D world. While we can quite accurately recover in-plane rotation and displacement transformations, it is difficult to determine any out-of-plane transformations. In order to establish all of the transformation components with a desired level of certainty, it has proven advantageous to use two or more 2D acquisitions [2, 12, 14, 35] for the proposed alignment problem.
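To make the contrast between the two optimizer families in the Optimization subsection concrete, the fragment below applies a coordinate-wise (non-gradient) sweep and a finite-difference gradient ascent step to a generic reward function f over a six-element pose vector. The step sizes and the toy objective are hypothetical; this is a schematic sketch, not the Powell or stochastic gradient ascent implementation described later in Chapter 3.

```python
import numpy as np

def coordinate_sweep(f, params, step=0.5):
    """Non-gradient style: probe each parameter in turn, keeping any move that improves f."""
    best = params.copy()
    for i in range(len(params)):
        for delta in (-step, +step):
            trial = best.copy()
            trial[i] += delta
            if f(trial) > f(best):
                best = trial
    return best

def gradient_ascent_step(f, params, rate=0.1, eps=1e-3):
    """Gradient style: move along a crude finite-difference gradient estimate
    (a stand-in for the analytic/stochastic derivatives derived in Chapter 3)."""
    grad = np.zeros_like(params)
    for i in range(len(params)):
        bumped = params.copy()
        bumped[i] += eps
        grad[i] = (f(bumped) - f(params)) / eps
    return params + rate * grad

if __name__ == "__main__":
    f = lambda p: -np.sum(p ** 2)                     # toy reward, maximum at the origin
    p0 = np.array([3.0, -2.0, 1.0, 0.5, -0.5, 2.0])   # (rotation, translation) guess
    p, q = p0.copy(), p0.copy()
    for _ in range(50):
        p = gradient_ascent_step(f, p)
        q = coordinate_sweep(f, q)
    print(np.round(p, 3), np.round(q, 3))             # both approach the origin
```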

2.1.6 Transformation Representation

Our task when attacking the 2D-3D registration problem is to return a geometric transformation that best specifies the position and orientation of the examined anatomy at the time of obtaining the 2D projection images. In other words, we want to find a way to align the imaging and the world coordinate systems, or to determine the correspondence between the intra-operative imaging environment and the coordinates of the pre-operative volumetric data (Fig. 2-5). We focus on fusing CT and biplanar X-ray fluoroscopy images. In that specific case, the emphasis is on registering bony structures, since both modalities best visualize such information. Characterizing the rigid movement of bones implies six degrees of freedom: one 3D parameter specifies orientation, the other provides displacement information. No other transformation, such as shearing or scaling, is allowed. If we also wished to align finer details, such as soft tissues, we would define higher-dimensional transformations.

Figure 2-5: The transformation parameter T which relates the coordinate frames of the imaging environment and the data volume; T = D_c R D_d.

Throughout this thesis, we denote the transformation that aligns the two coordinate systems by transformation T. In order to obtain a better intuition for what movement T represents, we decompose it into a collection of sub-transforms. When operating on data-points of the 3D volume, it is most natural to have all rotations happen around the center of the volume. Hence, if the data is not centered in its own coordinate system, a displacement operator needs to be applied. This displacement operator is constant for a given registration task, as it only depends on the specifications of the input volumetric dataset. Following the displacement, it is a rotational step and a translation in the oriented system that ensure the desired alignment. If we denote these operations by B_c, Q and B respectively (the subscript c emphasizes the fact that the associated variable refers to a constant), then a transformation G from data coordinates to the imaging environment could be composed as G = B Q B_c. As mentioned above, though, we are interested in computing the inverse of this transform, G^{-1}, which converts image coordinates into data coordinates. Hence, we can write transformation T:

T = G^{-1} = B_c^{-1} Q^{-1} B^{-1}.    (2.1)

In order to simplify our notation, we introduce new transformation variables for the inverse operations, D_c ≡ B_c^{-1}, R ≡ Q^{-1} and D_d ≡ B^{-1}, and thus modify the way we express T as:

T = G^{-1} = B_c^{-1} Q^{-1} B^{-1} = D_c R D_d.    (2.2)

The objective of the registration algorithm is to recover the non-constant components of T as accurately as possible. In Chapter 3, where we iteratively estimate the best parameters to provide the ideal alignment between the input images, the nature of the above decomposition plays an important role. (Note that we keep the same notation introduced for the decomposition of T throughout the rest of this document.)

2.1.7 Transformation Parameterization

For representing all six degrees of freedom of the rigid-body transformation, we use a new data structure. It is called a pose, and its name stems from the two notions that it describes: position and orientation. Given a pose parameter we can easily identify both its rotational and displacement components. As the rotational component is not linear in its parameters, the order of applying the transformation elements is essential; reversing them could produce a significantly different transformation. We use the usual convention of applying rotation first and then displacement. Therefore, if pose S were composed of rotational and displacement components (r, d), when applied to a coordinate point x, the resulting point could be written as

x' = S(r, d, x) = r(x) + d.

The composition of two pose transformations is not commutative. Given two poses S_1(r_1, d_1) and S_2(r_2, d_2), we have

S_3(r_3, d_3, x) = S_2(r_2, d_2, S_1(r_1, d_1, x)) = (S_2 ∘ S_1)(r_1, d_1, x) = r_2(r_1(x)) + r_2(d_1) + d_2,

so r_3 = r_2 ∘ r_1 and d_3 = r_2(d_1) + d_2. That is to say, in the case of two consecutive transformations, the rotational elements are composed and the total displacement results from the rotated version of the first translation added to the second translation. If the pose parameter only had a displacement component, we would write

x' = S(d, x) = x + d,

and if it only involved rotation, then the transformed point would become

x' = S(r, x) = r(x).

It is important to remember the above conventions, as in Chapter 3, when deriving the partial derivatives of the objective function with respect to the transformation parameters, we rely on them heavily.

There exist several ways to encode the transformation parameters that need to be recovered. The displacement part of T can be conveniently represented in a 3D vector format; the rotation parameter, however, can be formulated in several different ways. Just to name a few of the options, we could use: roll-pitch-yaw; Z-Y-X Euler angles; Z-Y-Z Euler angles; equivalent angle-axis; orthonormal matrices; and quaternions [23, 36]. We decided to represent our rotation operators as unit quaternions. This representation was appropriate for our needs, as the quaternion encoding is easy to formulate and the composition of rotation operators (which occurs very frequently in our code) becomes a vector multiplication in that space. One way to define a quaternion is by a four-dimensional vector whose elements encode the rotational information as follows:

q = { cos(θ/2), sin(θ/2) ω̂ }.    (2.3)

In Definition (2.3), θ refers to the angle of rotation around the unit-length axis ω̂. Quaternions are appropriate measures if we want to define a metric on the space of rotations, and they allow a uniform sampling of the rotation space [36].

We also use the equivalent angle-axis notation when illustrating the derivation of one of the update terms of the gradient ascent procedure in Chapter 3. In that case, if we represent the rotation transform with a vector k, the magnitude of k determines the angle of rotation and its direction stands for the axis of rotation.
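The conventions above (rotation applied before displacement, composition via r_3 = r_2 ∘ r_1 and d_3 = r_2(d_1) + d_2, and unit quaternions as in Eq. (2.3)) translate directly into code. The sketch below uses a hypothetical Pose class and standard quaternion formulas; it illustrates the bookkeeping only and is not the data structure used in our implementation.

```python
import numpy as np

def quat_from_axis_angle(axis, theta):
    """Unit quaternion q = (cos(theta/2), sin(theta/2) * unit_axis), as in Eq. (2.3)."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * axis))

def quat_multiply(q2, q1):
    """Quaternion product q2 * q1: rotation q1 followed by rotation q2."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([w2*w1 - x2*x1 - y2*y1 - z2*z1,
                     w2*x1 + x2*w1 + y2*z1 - z2*y1,
                     w2*y1 - x2*z1 + y2*w1 + z2*x1,
                     w2*z1 + x2*y1 - y2*x1 + z2*w1])

def quat_rotate(q, v):
    """Rotate 3D vector v by unit quaternion q (via q * (0, v) * conj(q))."""
    qv = np.concatenate(([0.0], np.asarray(v, dtype=float)))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_multiply(quat_multiply(q, qv), q_conj)[1:]

class Pose:
    """Rotation-then-displacement pose: x' = r(x) + d."""
    def __init__(self, q, d):
        self.q = np.asarray(q, dtype=float)
        self.d = np.asarray(d, dtype=float)

    def apply(self, x):
        return quat_rotate(self.q, x) + self.d

    def compose(self, other):
        """self o other: r3 = r2 o r1 and d3 = r2(d1) + d2, with self playing the role of S2."""
        return Pose(quat_multiply(self.q, other.q),
                    quat_rotate(self.q, other.d) + self.d)

if __name__ == "__main__":
    s1 = Pose(quat_from_axis_angle([0, 0, 1], np.pi / 2), [1.0, 0.0, 0.0])
    s2 = Pose(quat_from_axis_angle([1, 0, 0], np.pi / 2), [0.0, 2.0, 0.0])
    x = np.array([1.0, 1.0, 0.0])
    # Composing first and then applying matches applying S1 and then S2
    print(np.allclose(s2.compose(s1).apply(x), s2.apply(s1.apply(x))))  # True
```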

2.1.8 Other Notations

To follow the conventional notation in the medical imaging literature, we write U to denote the reference image and V to express the intensity values of the moving or floating images. In our case, U stands for the X-ray fluoroscopy acquisitions while V stands for the simulated radiographs. As the DRRs are constructed from the CT volume given a transformation estimate T, when we indicate the images that we compare, we use the notation (U(x); V(T(x))) to explicitly emphasize that dependence.

2.2 Outline of Our Registration Approach

Goal

The aim of our study is the registration of biplanar 2D X-ray fluoroscopic images to a corresponding 3D CT dataset. The geometry of the imaging environment is assumed to be known, so the location of the two imaging sources for the 2D acquisitions is taken to be fixed. By updating our initial best estimate of the transformation components, we aim to make the CT-derived simulated projection images (DRRs) best approximate the observed fluoro acquisitions.

The Choice of Similarity Measure

Our choice of similarity measure depended on the examined image modalities, prior knowledge about features and possible distortions in the images to be registered, speed requirements (whether the registration needed to be completed in real time during a surgical intervention, or the procedure was for treatment purposes and hence could run for hours prior to or following the intervention) and implementation issues. We decided to use the information theoretic notion of mutual information to measure the quality of image alignment. While Penney et al. found the performance of pattern intensity to be superior to that of MI [3], we have chosen this particular objective function for several reasons.

First, we have experienced robust performance and good accuracy in the past using MI, both in addressing 3D-3D multi-modal rigid-body registration [15, 16] and in 2D-3D video-frame to model surface alignment [17].

Second, execution time was another critical factor in our decision. We did not intend to use any pre-segmentation techniques to reduce the size of the examined data volume to make the algorithm run faster. We made this choice partly because we wanted to eliminate user interaction from our procedure and partly because, even if desired, it could be quite difficult to segment out individual bone segments in the anatomies that we analyzed. For instance, in the case of the pelvis, the ischium, ilium and sacrum are so uniformly and smoothly joined that it would be extremely difficult to distinguish clear boundaries between them. Also, in the case of MI, it has been shown that it is possible to reliably maximize its value even without using all available intensity information provided by the inputs. We investigate a stochastic sampling approach, which was introduced in a 3D-3D multi-modal registration problem [16]. The full input volume is considered in the registration task, but only a few randomly selected samples of it represent the dataset at each iteration. According to that scheme, we estimate probability distributions of image intensities by a sparse ray-casting method as opposed to constructing full DRRs. It is not clear that pattern intensity could be implemented in this framework. That similarity measure is evaluated over the whole input image, or at least on connected subregions of it. Hence, using pattern intensity in the case of bigger datasets could become very computationally intensive and time-consuming.

Third, the generality of MI, the fact that it does not assume a linear relationship between the random variables being compared, allows for a potential reuse of the algorithm for image modalities other than the ones currently presented.
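To illustrate the sampling idea, the sketch below estimates the mutual information between a reference image U and a comparison image V from a small random subset of corresponding pixels, using a joint histogram as a simple density estimate. The bin count, sample size and synthetic test images are arbitrary choices for illustration; this is not the sparse ray-casting estimator derived in Chapter 3.

```python
import numpy as np

def mutual_information_sampled(u, v, n_samples=1000, bins=16, seed=None):
    """Estimate MI(U; V) from a random subset of corresponding pixels,
    using a joint histogram as a simple probability density estimate."""
    rng = np.random.default_rng(seed)
    # Draw a sparse set of corresponding sample locations
    idx = rng.integers(0, u.size, size=n_samples)
    us, vs = u.ravel()[idx], v.ravel()[idx]
    # Joint and marginal densities from the sampled intensity pairs
    joint, _, _ = np.histogram2d(us, vs, bins=bins)
    p_uv = joint / joint.sum()
    p_u = p_uv.sum(axis=1, keepdims=True)
    p_v = p_uv.sum(axis=0, keepdims=True)
    nz = p_uv > 0
    # MI = sum over intensity pairs of p(u,v) * log( p(u,v) / (p(u) p(v)) )
    return float(np.sum(p_uv[nz] * np.log(p_uv[nz] / (p_u @ p_v)[nz])))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = rng.random((256, 256))
    v_related = 0.8 * u + 0.2 * rng.random((256, 256))   # statistically related to u
    v_random = rng.random((256, 256))                     # unrelated to u
    print(mutual_information_sampled(u, v_related, seed=1) >
          mutual_information_sampled(u, v_random, seed=1))  # expect True
```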

Maximization Strategy

In our study, to automatically locate the transformation that corresponds to the best alignment, we consider two optimization procedures: a stochastic gradient ascent procedure and the non-gradient Powell method. We preferred a gradient-guided search because of its computational efficiency; however, the Powell method was found to be extremely robust and was very helpful when designing experiments on the real X-ray datasets. The stochastic nature of the gradient-based optimization procedure stems from using noisy approximations of the partial derivatives instead of relying on true and accurate measures. The reason for applying such an estimate is to simplify computations, to speed up the overall registration process and to help escape local extrema of the similarity measure.

2.3 Summary

In this chapter, we presented a high-level description of the 2D-3D registration problem and we provided some terminology and background information relevant to our proposed project. Additional details included specifics about medical image modalities, similarity functions, optimization techniques and the transformation representation that we used to encode the sought pose parameters. We also gave a short summary of the motivation and the basic framework of the alignment approach that we investigated.

43 Chapter 3 The Registration Algorithm Chapter Summary In this Chapter, we give a detailed description of our registration procedure. First we remind the reader what transformation components we aim to recover as a result of our rigid-body registration algorithm. Then we introduce mutual information, the objective function we use, and describe its implementation details. We also compare two different optimization approaches, Powell s method and stochastic gradient ascent, which we have used to locate the extremum of the objective function. We derive in detail some of the update terms that are necessary for finding the desired alignment transformation. Lastly, we give a general overview of the registration algorithm. The description, results and performance evaluation of our experiments are presented in Chapter The Transformation Parameter For the specific case of fusing CT and X-ray images, the primary focus is on registering bony structures, since both modalities best visualize such information. Characterizing the rigid movement of bones implies six degrees of freedom, three describing a rotational and three a displacement term. Our registration tool can also be thought of as a tool for aligning two different coordinate systems: that of the intra-operative 41

imaging environment and that of the pre-operative image volume itself. Transformation T is used to transform the imaging coordinates to their corresponding equivalent in world coordinates (Fig. 2-5). As detailed in Chapter 2, T is a pose parameter. It is constructed from a rotational and a translational element. However, in order to distinguish constant and variable components of T, we decompose it into three individual sub-transforms. We write

T = D_c \circ R(r) \circ D_d(d).   (3.1)

In Eq. (3.1), D_c is a constant displacement term that is responsible for positioning the data volume into the center of its own coordinate system (so that rotation may be performed around its center), R encodes the rotational component required to perform the match, and translation D_d positions the object in the imaging coordinate system. As we specify T to be the transformation that expresses imaging coordinates in terms of data coordinates, the appropriate order of the sub-transforms is D_d followed by R and D_c. Decoupling the components of the transformation in such a way is useful because it makes the parameter space more directly searchable for the optimization procedures.

When we have access to multiple views of the same anatomy, we assume that the relationship between the various viewing sources is known. Hence, when we want to simulate projection images taken by other than the initial imaging source, we first apply a known, view-dependent transform to the coordinates and then apply the above introduced T. In the case of a biplanar application, where transformation N provides the relationship between the two imaging locations, we have T_2 = T \circ N. In more detail, we can write the expression of a point transformed by T and T_2 as

T(x) = (D_c \circ R \circ D_d)(x) = D_c(R(r, D_d(d, x))) = D_c(R(r, x + d)) = D_c(r(x + d))

and

T_2(x) = D_c(r(N(x) + d)).

Given this formulation, it is only the variables R and D_d that we need to accurately recover. The rest of the components are known and constant; they are determined from a calibration procedure. D_c is purely dependent on the specifications of the input volume dataset, and the imaging geometry is characterized by the non-varying transform N. Hence, when we investigate how the alignment quality changes with respect to infinitesimal changes in T (Section 3.4), we implicitly refer to modifications with respect to the operations R and D_d.

3.2 The Objective Function

We refer to measures that quantify the alignment quality of the input images as objective functions. From a broad range of candidates that have been used to assist in registration procedures, we decided to employ an information theoretic term called mutual information (MI). This similarity measure has quickly gained popularity in multi-modal medical image registration after it was first introduced [15, 20]. In the case of 3D-3D rigid registration of head datasets, MI has proved to be a robust objective function that can be applied to numerous image modalities.

Recently, there have been several extensions suggested to improve the general performance of MI. In most cases, it is gradient or some other type of spatial information that is incorporated into the measure. One such example is the introduction of both the magnitude and the direction of the image gradients into the mutual information formulation [18]. Although some robustness improvements can be demonstrated with these new methods, the altered objective functions do not preserve the information theoretical framework of the original formulation. We did not use these measures.
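As an aside, the pose parameterization of Section 3.1 can be made concrete with a short sketch. Python with NumPy/SciPy is assumed for all code sketches in this document; the function names are illustrative and not part of the original implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def make_pose(r, d, c):
    """T(x) = D_c(R(r, x + d)) from Eq. (3.1).
    r: rotation vector (axis * angle), d: variable displacement,
    c: constant centering displacement D_c."""
    rot = Rotation.from_rotvec(r)              # equivalent angle-axis rotation R(r)
    return lambda x: c + rot.apply(np.asarray(x, dtype=float) + d)

def make_biplanar_pose(T, N):
    """Second view: T_2(x) = T(N(x)), with N the known inter-view transform."""
    return lambda x: T(N(x))
```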

3.2.1 Definition of MI

In information theory, the mutual information of two discrete random variables expresses how much the knowledge about one of these variables increases the knowledge about the other. More informally, instead of assuming a linear relationship between the values of the random variables that are compared (as some of the widely used correlation functions do), it proposes that, in the registration problem, the intensity values from the corresponding images maximally explain each other when the images are perfectly aligned. When the mutual information between two random variables is zero, knowing one of them conveys no further information about the other and they are statistically independent. However, a non-zero mutual information term indicates that, given one of the variables, the value of the other can be predicted with a given level of certainty.

There exist several definitions of mutual information. For example, according to the Kullback-Leibler distance interpretation, the mutual information of two random variables, A and B, is defined to be the relative entropy between the joint probability distribution of the two variables and the product of their marginal distributions, which would be the correct joint model if they were statistically independent. Thus MI is a measure of the extent to which they are not statistically independent. (Note that the information theoretical notation in the rest of this chapter adheres to the conventions of [43].)

I(A, B) = D(p(A, B) \| p(A) p(B)) = E_{A,B}\left[\log \frac{p(A, B)}{p(A) p(B)}\right] = \sum_{a \in A} \sum_{b \in B} p(a, b) \log \frac{p(a, b)}{p(a)\, p(b)}

In our computations, we will use another definition of MI. In order to introduce that formulation, we need to introduce another information theoretic term, entropy. The Shannon entropy of a random discrete variable A, H(A), measures the uncertainty about that variable, or the amount of randomness. It is formulated as the

expected value of the negative log probability:

H(A) = E_A[-\log p(A)] = -\sum_{a \in A} p(a) \log p(a).   (3.2)

Likewise, the joint entropy of two random variables A and B is written as

H(A, B) = E_{A,B}[-\log p(A, B)] = -\sum_{a \in A} \sum_{b \in B} p(a, b) \log p(a, b).   (3.3)

The formula that we apply for our registration calculations involves the sum of the individual entropy terms less the joint entropy of the variables:

I(A, B) = H(A) + H(B) - H(A, B).   (3.4)

MI in the Registration Problem

In our biplanar registration procedure, we use two 2D projection images to guide the search for the best alignment parameters. Hence, we define our objective function g as the sum of mutual information terms, g = I_1 + I_2, where I_1 and I_2 stand for the mutual information quantities computed between the two observed fluoroscopy images and the CT-derived DRRs that are to be registered. Hereafter, for the sake of simplicity, when describing the computational details of the algorithm, we use only the first MI term of the sum, I_1, and refer to it as I (leaving the subscript off). All procedures, however, need to be carried out with respect to both of the image pairs.

Our 2D-3D registration strategy is based upon the comparison of the input X-ray fluoroscopic acquisitions to their simulated equivalents produced from the 3D volumetric dataset by applying the current estimate of the transformation parameter. These are treated as two discrete random variables whose mutual information needs

to be evaluated. As noted in Chapter 2, we denote the observed 2D image by U(X) and the transformation-dependent DRR by V(T(X)), where X is the set of sample points examined for comparison purposes. Writing our objective function with respect to these terms,

I(U(X), V(T(X))) = H(U(X)) + H(V(T(X))) - H(U(X), V(T(X)))
                 = E_{U,V}[\log p(U(x), V(T(x)))] - E_U[\log p(U(x))] - E_V[\log p(V(T(x)))].   (3.5)

3.3 Probability Density Estimation

One of the main challenges in evaluating the objective function expressed in Eq. (3.5) lies in accurately estimating the marginal and joint probability densities of the random variables. These quantities denote the probability distributions of the image intensities. We apply two different types of density estimators in our calculations. One of our approaches uses the non-parametric density estimator called Parzen Windowing [38], and the other uses 1D and 2D histograms.

The amount of information presented by high-resolution image volumes, however, is huge, and considering contributions from all pairs of corresponding image pixels is not always practical. It requires the generation of full DRR images at each iteration, which creates an immense computational burden. Therefore, we experimented with both a dense and a sparse sampling approach. In the latter scenario, we base our probability estimates on a small number of random image intensity samples instead of using all the intensity values available from the overlapping image regions. The smaller the sample size we use, the faster the estimation becomes. However, with each intensity value ignored we trade off accuracy (and possibly convergence of the algorithm). This strategy provides noisy estimates at each individual step; however, if the samples are randomly selected and the estimation procedure is repeated a sufficiently large number of times, it can be shown that the estimates converge to the true values [26].
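To illustrate the sparse estimator, the following sketch computes the MI of Eq. (3.4) from a small set of paired intensity samples via a joint histogram. It is a minimal sketch rather than the thesis implementation; the 32-bin choice mirrors the setting reported in Section 3.3.2.

```python
import numpy as np

def sparse_mi(u, v, bins=32):
    """Estimate I(U;V) = H(U) + H(V) - H(U,V) from paired intensity samples
    u, v (1D arrays drawn at the same sparse sample locations)."""
    joint, _, _ = np.histogram2d(u, v, bins=bins)
    p_uv = joint / joint.sum()                      # joint probability estimate
    p_u, p_v = p_uv.sum(axis=1), p_uv.sum(axis=0)   # marginal estimates

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return entropy(p_u) + entropy(p_v) - entropy(p_uv.ravel())
```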

3.3.1 Parzen Windowing

Applying Parzen Windowing for probability density estimation is a standard technique in the computational community. With this method, the underlying probability density is estimated by a sum of symmetric kernels whose centers are fit to the individual sample points; most frequently, the kernel is defined to be Gaussian. This kernel choice significantly simplifies computations. Given the Parzen Windowing formulation and a Gaussian kernel, we can write the probability density estimate of a random variable z as

p(z) \approx \frac{1}{n} \sum_{i=1}^{n} G_\psi(z - z_i), \quad \text{where} \quad G_\psi(z) \equiv (2\pi)^{-\frac{n}{2}} |\psi|^{-\frac{1}{2}} \exp\left(-\frac{1}{2} z^T \psi^{-1} z\right).   (3.6)

In Eq. (3.6), n signifies the number of points in the sample collection Z (where i \in \mathbb{N}^+, 0 \le i < n and z_i \in Z) upon which our estimates rest, \psi indicates the covariance matrix and G stands for the Gaussian kernel.

3.3.2 Histogramming

As opposed to the continuous Parzen Windowing strategy, the histogramming approach uses a discrete approximation. Probability densities are calculated after the construction of 1D and 2D intensity histograms. Sample points from overlapping regions of the corresponding image pairs are used to fill the histograms, and probability densities are estimated directly from those entries.

Dense histograms, for which all the available corresponding intensity pairs are utilized in formulating these estimates, have been widely used. Many use them for 3D-3D MI registration, and one specific application that successfully applied this method is a 2D-3D registration algorithm aligning surface models to video [17]. We used dense histogramming when evaluating MI in the case of our Powell procedure.

Experimenting with sparsely sampled histograms is a new idea. It would be reasonable to expect that just a few random samples from a large dataset cannot provide a valid approximation of the underlying probability density function of

the image intensities. However, empirically, we found that in the case of the medical modalities that we have examined, the estimates can be useful and they can be used in the stochastic gradient ascent optimization procedure. In general, building histograms from random samples and using 32 intensity bins over the intensity range of the inputs proved to be adequate in our stochastic optimization framework. Decreasing the bin sizes did not improve the registration results.

3.4 The Optimization Procedures

As a reminder, our task of finding the ideal transformation T is formulated as a maximization problem. We perform an iterative search to locate the parameters that produce the highest score according to a reward function. At each iteration of the algorithm, we use the current estimate of transformation T to simulate the creation of the observed X-ray images by DRRs. We compute the quality of the alignment between these DRRs and the corresponding fluoro acquisitions. Then, to improve the matching score, we update the transformation estimate and start the registration loop over. In order to identify the set of updates in an efficient and/or reliable manner, we need to select an optimization procedure. We compare the performance of a direction set and a gradient-based optimization strategy: Powell's method and the stochastic gradient ascent procedure [40].

Powell's Method

Powell's method is a direction set method. It optimizes the input function in a succession of one-dimensional line maximization steps. Given an n-dimensional search space, the function maximum could be located in just one pass of n line optimizations. That would, however, assume that linearly independent search directions are provided. In practice, it can be difficult to identify those. Hence, instead of aiming to work with mutually conjugate directions, a few good directions are selected that

enable the localization of function extrema quickly.

The Powell procedure requires no calculation of the gradient. However, in order to evaluate the similarity measure for the individual line optimizations, full reconstruction of the DRRs is necessary. That can easily cause a computational bottleneck in the algorithm. As we operate on huge datasets, applying the multiresolution approach was inevitable when using this strategy.

It has also been established that the Powell method is sensitive to the order in which the parameters are optimized during the line maximizations. One study concluded that the updates should happen in the order of in-plane followed by out-of-plane parameters [21]. We handle the translation components first and then the rotational elements.

Gradient Ascent Strategy

The gradient ascent technique is a maximization method whose local search for the optimal parameter settings is guided by calculations of the objective function's gradient. As opposed to the Powell method, whose search directions are either predetermined or continuously modified to approximate linear independence, it explores the parameter space by making steps in the directions defined by the gradients. As a result, the objective function does not even need to be evaluated at each round; it is sufficient to only calculate the partial derivative terms. (Nevertheless, as explained in Chapter 4, in order to monitor the convergence behavior of the algorithm, we do compute the similarity measure at each step of the maximization phase.)

We use a probabilistic version of the gradient ascent procedure to find the local maximum of our objective function. The stochastic approximation approach [26, 27, 28] uses noisy estimates of the derivatives instead of the true ones in order to increase computational efficiency. The stochastic nature of our algorithm originates from two sources: we approximate the probability distributions of our variables by Parzen Windowing or sparse histogramming, and we use various simplifications to compute the required derivatives. This approach has performed remarkably well in

3D-3D multi-modal medical image registration problems [15, 16].

Defining the Update Terms

In the case of the Powell experiments, the optimization task is carried out almost as a black-box procedure. Mutual information is evaluated for each estimate of T, and the transformation updates are calculated by the Brent line optimization method [40]. The optimization procedure finishes as soon as the gain from refining the estimate of T falls below a threshold tolerance measure.

When the gradient-based approach is used, we formulate a good estimate of the transformation update by computing the partial derivative of the objective function g with respect to the transformation parameters. We write:

T_{updated} = T_{current} + \lambda \left. \frac{\partial g}{\partial T} \right|_{T_{current}}.   (3.7)

In Eq. (3.7), \lambda represents the learning rate (or step size) of the algorithm; it constrains the maximal magnitude of the individual update operations. Finding the appropriate range for \lambda forms a crucial part of the experiments. If its magnitude is too small, convergence might take a long time; however, if it is set too high, convergence to the searched optimum might not occur at all. One way to avoid fixing an ideal value for the learning rate is to vary it over time. This practice is called annealing, and it assigns decreasing values to \lambda as the iteration number increases.

We eliminate the difficulty of selecting the ideal learning rate by using a hierarchical structure. Registration can be executed on several levels of resolution in order to make the algorithm run faster and to make it more robust. At the bottom of the pyramid, working with downsampled and smoothed versions of the input, we expect that it is easier to jump over local extrema and calculations can be executed in a smaller amount of time. At this level, the estimates might not be very accurate (they are indeed quite noisy), but they can be easily and swiftly refined on higher levels, where smaller step sizes and more data samples can be used. As

the resolution of the inputs increases, the transformation approximation can be made more precise. Details of this strategy with some experimental results are explained in Chapter 4.

We differentiate between the learning rates of rotational and displacement components. It is important to have both types of components contributing at the same rate to the overall transformation update. Further distinction could be made between components corresponding to in-plane and out-of-plane operations.

3.5 Gradient-based Update Calculations

As explained above, to improve our current transformation estimate according to the gradient ascent procedure, we require the computation of the partial derivative of our objective function with respect to the transformation parameters (Eq. (3.7)). Using Eq. (3.4) to express MI, we want to compute

\frac{\partial I(U(X), V(T(X)))}{\partial T} = \frac{\partial H(U(X))}{\partial T} + \frac{\partial H(V(T(X)))}{\partial T} - \frac{\partial H(U(X), V(T(X)))}{\partial T}.   (3.8)

As the first term on the right of Eq. (3.8), the entropy of the observed image, does not depend on the transformation parameter, the equation can be simplified:

\frac{\partial I(U(X), V(T(X)))}{\partial T} = \frac{\partial H(V(T(X)))}{\partial T} - \frac{\partial H(U(X), V(T(X)))}{\partial T}.   (3.9)

The first approximation step in our algorithm results from the fact that we estimate statistical expectation terms with sample averages. In such a case, the entropy of a random variable A can be estimated as follows:

H(A) = E_A[-\log p(A)] \approx -\frac{1}{N} \sum_{a \in A} \log p(a),   (3.10)

where a is one of N observed samples drawn from the sample set A. Therefore, in the specific case of our registration problem, given M samples in our

observation set X, whose i-th sample point is x_i, we can write Eq. (3.9) as

\frac{\partial I(U(X), V(T(X)))}{\partial T} \approx -\frac{1}{M} \sum_{i=1}^{M} \frac{\partial}{\partial T} \log p(V(T(x_i))) + \frac{1}{M} \sum_{i=1}^{M} \frac{\partial}{\partial T} \log p(U(x_i), V(T(x_i))).   (3.11)

Partial Derivatives of Density Estimators

Parzen Windowing Approach

Given the definition of the Parzen Windowing probability density estimator in Def. (3.6), we can rewrite the entropy approximation in Eq. (3.10):

h(z) \approx -\frac{1}{N_A} \sum_{z_j \in A} \ln \frac{1}{N_B} \sum_{z_i \in B} G_\psi(z_j - z_i),

where B is another random variable, another set of sample points. This expression is continuous; taking its partial derivative with respect to the transformation parameter produces

\frac{d}{dT} h(z(T)) \approx \frac{1}{N_A} \sum_{z_j \in A} \sum_{z_i \in B} W_z(z_j, z_i) (z_j - z_i)^T \psi^{-1} \frac{d}{dT}(z_j - z_i),   (3.12)

W_z(z_j, z_i) \equiv \frac{G_\psi(z_j - z_i)}{\sum_{z_k \in B} G_\psi(z_j - z_k)}.   (3.13)

Writing the partial derivative of mutual information from Eq. (3.9) and (3.11) then becomes

\frac{dI}{dT} \approx \frac{1}{N_A} \sum_{x_j \in A} \sum_{x_i \in B} (v_j - v_i)^T \left[ W_v(v_j, v_i) \psi_v^{-1} - W_{uv}(w_j, w_i) \psi_{uv}^{-1} \right] \frac{d}{dT}(v_j - v_i),   (3.14)

where we use the following definitions:

\psi_{uv}^{-1} = \mathrm{DIAG}(\psi_{uu}^{-1}, \psi_{vv}^{-1}),   (3.15)

W_v(v_j, v_i) \equiv \frac{G_{\psi_v}(v_j - v_i)}{\sum_{x_k \in B} G_{\psi_v}(v_j - v_k)}, \qquad W_{uv}(w_j, w_i) \equiv \frac{G_{\psi_{uv}}(w_j - w_i)}{\sum_{x_k \in B} G_{\psi_{uv}}(w_j - w_k)},   (3.16)

and

u_i \equiv U(x_i), \quad v_i \equiv V(T(x_i)), \quad w_i \equiv [u_i, v_i]^T.   (3.17)

This formulation of entropy manipulation and estimation is called EMMA [42]. (The acronym stands for Empirical Entropy Manipulation and Analysis.) It provides an efficiently optimizable entropy measure, which is calculated from random samples of the available data points. Exhaustive sampling would be of quadratic cost in the sample size, hence only a few samples are selected. Although the fewer samples are used, the more noise this approach introduces into the calculations, this noise also allows it to effectively escape from local extrema. The convergence of this estimate to its true value was proved by Viola [42]. The EMMA estimate uses a Gaussian function for the Parzen kernel, but that could be replaced by any differentiable function. The only unknown expression in Eq. (3.14) is the partial derivative of the volume intensities with respect to the transformation components, \frac{d}{dT}(v_i - v_j). It is computed in great detail in Section 3.5.2.

Histogramming

For the histogramming approach we need to further manipulate Eq. (3.11). After some algebraic operations and expanding the partial derivatives,

\frac{\partial I(U(X), V(T(X)))}{\partial T} = \frac{1}{M} \sum_{i=1}^{M} \frac{1}{p(U(x_i), V(T(x_i)))} \frac{\partial p(U(x_i), V(T(x_i)))}{\partial T} - \frac{1}{M} \sum_{i=1}^{M} \frac{1}{p(V(T(x_i)))} \frac{\partial p(V(T(x_i)))}{\partial T}.   (3.18)

Then

\frac{\partial I(U(X), V(T(X)))}{\partial T} = \frac{1}{M} \sum_{i=1}^{M} \left[ \frac{1}{p(u_i, v_i)} \frac{\partial p(u_i, v_i)}{\partial T} - \frac{1}{p(v_i)} \frac{\partial p(v_i)}{\partial T} \right].   (3.19)

To complete the optimization task, the density estimator needs to be differentiated

(Eq. (3.19)) with respect to the components of transformation T. We adopted ideas that were introduced for dense histograms [17]. Given a histogramming function f, approximating the probability density function of a random variable A based upon a collection of sample points B, the probability of a \in A is given by p(a) \approx f(a, B), and the derivative of f with respect to a variable s is estimated according to

\frac{d}{ds} f(a, B) = \frac{\partial f(a, B)}{\partial a} \frac{da}{ds} + \frac{\partial f(a, B)}{\partial B} \frac{dB}{ds}.   (3.20)

The application of the chain rule in Eq. (3.20) makes an implicit assumption. It holds only for cases when the histogram estimator function f is not explicitly dependent on the variable s with respect to which the derivative is taken. Although this assumption is not quite valid in our scenario (the histograms do depend on transformation T, with respect to which we take derivatives), empirically, it was established that small changes in the parameters of T are unlikely to (greatly) alter the nature of the density estimator. Hence, we apply the simplification. Furthermore, the last term on the right of Eq. (3.20) can be ignored if differential changes in the sample intensities in B result in vanishingly small changes in the density values estimated by f. Based on our experiments, that condition also holds for sparse sampling.

Utilizing the assumptions explained in the case of equation (3.20), and after some algebraic manipulations, the terms in Eq. (3.19) can be expressed as

\frac{\partial p(u_i, v_i)}{\partial T} \approx \frac{\partial p(u_i, v_i)}{\partial v_i} \frac{\partial v_i}{\partial T} \quad \text{and} \quad \frac{\partial p(v_i)}{\partial T} \approx \frac{\partial p(v_i)}{\partial v_i} \frac{\partial v_i}{\partial T}.   (3.21)

The terms in Eq. (3.21) correspond to changes in the DRR image intensity values resulting from modifications in the transformation parameters and to changes in the probability densities as a result of changes in sample intensities. We approximate the derivatives of the probability densities by the use of finite differences calculated from their corresponding histogram estimates. Deriving the other unknown term, \frac{\partial v_i}{\partial T}, though, is more complex, and the details of the related computations are explained below. This is the same term that we need for the Parzen approximation in Eq. (3.14).
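A minimal sketch of the histogram-based estimate of p(v) and of its finite-difference derivative used in Eq. (3.21) follows; the bin count and intensity range are illustrative assumptions rather than values taken from the thesis.

```python
import numpy as np

def histogram_density(v_samples, bins=32, lo=0.0, hi=255.0):
    """Return a lookup for p(v) and dp/dv, estimated from a sparse set of
    DRR intensity samples via a 1D histogram."""
    p, edges = np.histogram(v_samples, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    dp_dv = np.gradient(p, width)            # central finite differences per bin

    def lookup(v):
        idx = np.clip(((np.asarray(v) - lo) / width).astype(int), 0, bins - 1)
        return p[idx], dp_dv[idx]

    return lookup
```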

3.5.2 Partial Derivatives of Volume Intensities wrt T

Each iteration of our iterative search corresponds to a small angle rotation and a small displacement applied to the current transform estimate. As, among the components of transformation T, only rotation R and displacement D_d need to be recovered, we only take partial derivatives with respect to these terms. For calculations of the update elements, we introduce a new term, the update rotation R_u. This operator represents the small angle rotation r which adjusts the value of the current rotation estimate at the end of each iteration cycle. (Note that at the beginning of each iteration of the registration algorithm r is reset to be a zero angle rotation.) Hence we write the new rotation component of the transform as (R_u \circ R) and the transformation itself as

T = D_c \circ R_u \circ R \circ D_d = D_c \circ R_u(r) \circ R \circ D_d(d).

A transformed point becomes

T(x) = T(r, d, x) = D_c(R_u(r, R(D_d(d, x)))).   (3.22)

From Eq. (3.21), we need to compute

\frac{\partial v_i}{\partial T} = \frac{\partial V(T(x_i))}{\partial T} = \left\{ \frac{\partial V(T(x_i))}{\partial r}; \frac{\partial V(T(x_i))}{\partial d} \right\}.   (3.23)

In the following calculations, the vector r encodes a rotation transform according to the equivalent angle-axis notation. The magnitude of vector r determines the angle of rotation and its direction stands for the axis of rotation (see Section 2.1.7). In order to express the partial derivative terms, we use the ray-casting algorithm to model the formation of the fluoro image intensities. (The ray-casting algorithm is used instead of a more efficient procedure, as we only sample a small fraction of the image intensities and the whole image is not constructed.) In particular, a sample of the simulated fluoroscopic image at location x_i on the image plane (or at T(x_i) in

data coordinates) is approximated as

V(T(x_i)) = \sum_{z \in ray(T(x_i), S)} Vol(z),

where ray refers to the line segment which connects the imaging source S with T(x_i) on the imaging plane, and z indicates uniformly distributed steps along that ray within the volume. As the steps are located in the transformed coordinate space, we can write z = T(y) = T(r, d, y). Therefore,

\frac{\partial V(T(x_i))}{\partial T} = \sum_{z \in ray(T(x_i), S)} \frac{\partial Vol(T(r, d, y))}{\partial T}.   (3.24)

Update wrt Displacement

We first calculate the partial derivative of the volume intensity with respect to the i-th component of displacement d, denoted as d_i. In Eq. (3.30) and (3.25), e_i stands for a unit vector whose components are all zero except for the i-th one, which equals one.

\frac{\partial}{\partial d_i} Vol(T(y)) = \nabla Vol(T(y)) \cdot \frac{\partial (D_c(R_u(r, R(D_d(d, y)))))}{\partial d_i}
= \nabla Vol(T(y)) \cdot \frac{\partial (R_u(r, R(D_d(d, y))))}{\partial d_i}
= \nabla Vol(T(y)) \cdot \frac{\partial (R_u(r, R(y + d)))}{\partial d_i}
= \nabla Vol(T(y)) \cdot \frac{\partial (R_u(r, R(y)) + R_u(r, R(d)))}{\partial d_i}
= \nabla Vol(T(y)) \cdot \frac{\partial (R_u(r, R(d)))}{\partial d_i}
= \nabla Vol(T(y)) \cdot (R_u(r, R(e_i)))   (3.25)

The full expression is

\frac{\partial}{\partial d} Vol(T(y)) = \begin{bmatrix} \nabla Vol(T(y)) \cdot (R_u(r, R(e_1))) \\ \nabla Vol(T(y)) \cdot (R_u(r, R(e_2))) \\ \nabla Vol(T(y)) \cdot (R_u(r, R(e_3))) \end{bmatrix}.   (3.26)
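A sketch of one ray-cast DRR sample and the displacement part of its gradient follows. It is only a sketch: the volume is indexed in voxel coordinates, trilinear interpolation stands in for the thesis's sampling scheme, and grad_vol is assumed to be a precomputed tuple of the three partial-derivative volumes (e.g. from numpy.gradient).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def drr_sample_and_grad_d(vol, grad_vol, source, target, R_mat, n_steps=64):
    """One DRR sample V(T(x_i)) = sum_z Vol(z) along the ray from the source S
    to the transformed image point T(x_i), plus its gradient with respect to
    the displacement d (Eq. 3.26). R_mat is the current (R_u R) rotation matrix."""
    source = np.asarray(source, dtype=float)
    target = np.asarray(target, dtype=float)
    ts = np.linspace(0.0, 1.0, n_steps)
    pts = source[None, :] + ts[:, None] * (target - source)[None, :]  # steps z along the ray
    coords = pts.T                                                    # shape (3, n_steps)
    value = map_coordinates(vol, coords, order=1).sum()               # V(T(x_i))
    g = np.stack([map_coordinates(grad_vol[k], coords, order=1)       # grad Vol at each step
                  for k in range(3)], axis=1)
    grad_d = g.sum(axis=0) @ R_mat        # k-th entry: sum_z grad Vol(z) . (R_u R e_k)
    return value, grad_d
```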

Update wrt Small Angle Rotation

We derive the partial derivative terms of the volume intensities with respect to the rotation component of transformation T similarly to the above. First we only consider the i-th element of r, which we denote as r_i.

\frac{\partial}{\partial r_i} Vol(T(y)) = \nabla Vol(T(y)) \cdot \frac{\partial (D_c(R_u(r, R(D_d(d, y)))))}{\partial r_i}   (3.27)
= \nabla Vol(T(y)) \cdot \frac{\partial (R_u(r, R(D_d(d, y))))}{\partial r_i}   (3.28)
= \nabla Vol(T(y)) \cdot \frac{\partial (R(D_d(d, y)) + r \times R(D_d(d, y)))}{\partial r_i}   (3.29)
= \nabla Vol(T(y)) \cdot \frac{\partial (r \times R(D_d(d, y)))}{\partial r_i} = \nabla Vol(T(y)) \cdot (e_i \times R(D_d(d, y)))   (3.30)
= e_i \cdot (R(D_d(d, y)) \times \nabla Vol(T(y))) = (R(D_d(d, y)) \times \nabla Vol(T(y)))_i

Hence, with respect to the full vector r,

\nabla_r Vol(T(y)) = R(D_d(d, y)) \times \nabla Vol(T(y)).   (3.31)

We note two of the steps in the above derivation. First, Eq. (3.28) is a result of a simplification of the formula in Eq. (3.27). As the constant displacement operation D_c only happens after the rotation, it has no effect on the partial derivatives that are being calculated, so that term disappears from the numerator. Secondly, to arrive at Eq. (3.29), we use the fact that R_u is strictly defined to stand for a small angle rotation. In that case, we can make the assumption that a coordinate point p, after a rotation by r, can be expressed in the form

p' = R_u(r, p) = r(p) = p + r \times p.   (3.32)

For a more detailed explanation of why Eq. (3.32) holds, see the Appendix.
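Eq. (3.31) translates directly into code; paired with the ray-casting sketch above, the per-step rotation gradient might look like the following (again only a sketch, with illustrative names):

```python
import numpy as np

def grad_wrt_rotation(rotated_points, grad_vol_at_steps):
    """Per ray step, Eq. (3.31): d Vol(T(y)) / dr = R(D_d(d, y)) x grad Vol(T(y)).
    rotated_points: (n, 3) array of R(D_d(d, y)) at the ray steps,
    grad_vol_at_steps: (n, 3) array of the volume gradient at T(y)."""
    return np.cross(rotated_points, grad_vol_at_steps)   # (n, 3): one 3-vector per step
```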

As a reminder, calculations in the case of the second projection image and the corresponding fluoroscopy image are performed in the same manner. The only difference is that, before T is applied to transform a coordinate, an additional transformation takes place which is responsible for expressing the second projection environment.

3.6 Summary

We provided a detailed description of the objective function that we selected to evaluate the estimated transformation parameters at intermediate stages of the registration algorithm. We use two different strategies to identify the parameters that maximize that measure. One of them, Powell's method, only needs to know how to evaluate the matching score, while the more efficient gradient-based technique instead computes the direction of the updates that could lead to the optimum. We derived, in great detail, the terms that are required for the latter strategy and presented two distinct ways of estimating probability densities, which is also a required component of the maximization process.

Chapter 4

Experimental Results

Chapter Summary

This chapter introduces the experimental framework that we used in order to characterize the performance of our registration procedure. We also describe the 2D and 3D datasets that were available to us and provide quantitative and qualitative evaluation of the registration results. We present results not only from experiments with CT and fluoroscopic images, but also with CT and CT-derived simulated DRRs. The latter experiments were necessary as we did not obtain ground truth information along with the real projection images. In order to thoroughly explore the characteristics of our method, we also provide accuracy results with respect to simulated datasets. In that analysis, we address issues related to multiresolution techniques, speed criteria and robustness.

4.1 Probing Experiments

Before we started evaluating the performance of our registration approach, we intended to carefully and extensively investigate the robustness of our objective function. We also intended to compare the properties of mutual information to those of another widely used similarity function, pattern intensity. Therefore, we designed probing experiments that would quantitatively describe the behavior of an objective

function with respect to its free variables. Given a ground-truth estimate of the searched parameters as the starting position and orientation, the matching qualities were computed while some or all of the free variables were slightly and iteratively modified. In the majority of the experiments, we only varied one of the variables at a time. Although that decision prohibited us from acquiring a more complete characterization of the similarity measure, it was a way to keep the computation time under reasonable limits. (Otherwise, thoroughly evaluating any kind of matching criterion in a higher dimensional space could be a real challenge, especially given the average size of our input volumes. For a detailed reference on the size of the datasets, see Table 4.1.)

With the help of the probing experiments, we were able to form a reasonable prediction about the major objective function characteristics. The capture range and the height and location of the function extremum were all useful for estimating the registration performance of the examined objective function given a specific dataset. We show two examples of outputs of such probing experiments in Fig. 4-1. The one on the left hand side, Fig. 4-1 (a), evaluates mutual information and the other, Fig. 4-1 (b), pattern intensity on a CT-derived skull dataset.

Pattern intensity (PI) is an objective function that some studies found to be quite robust when solving the 2D-3D rigid-body registration task [6, 3]. It operates on the difference image of its two inputs and computes the structuredness in small neighborhoods of each individual pixel. The more uniform the neighboring intensities are, the higher the score that pattern intensity assigns at that particular pixel location. We provide the formula for calculating PI on the difference image (I_diff) of the two input images (I_fluoro and I_drr) in Eq. (4.1). The detailed definition of mutual information was provided in Section 3.2.1.

PI_{r,\sigma}(I_{diff}) = \sum_{x,y} \sum_{(u-x)^2 + (v-y)^2 < r^2} \frac{\sigma^2}{\sigma^2 + (I_{diff}(x, y) - I_{diff}(u, v))^2}, \quad \text{s.t. } \sigma \text{ is a constant},   (4.1)

and I_{diff} = I_{fluoro} - s \cdot I_{drr}, where s \in \mathbb{R}^+.
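As an illustration of Eq. (4.1), a straightforward (and unoptimized) pattern intensity computation might look like the following sketch; s, sigma and r are assumed constants, and wrap-around at the image borders is ignored for brevity.

```python
import numpy as np

def pattern_intensity(fluoro, drr, s=1.0, r=3, sigma=10.0):
    """Pattern intensity of the difference image I_diff = I_fluoro - s * I_drr,
    summed over the radius-r neighbourhood of every pixel (Eq. 4.1)."""
    diff = fluoro - s * drr
    score = 0.0
    for du in range(-r + 1, r):
        for dv in range(-r + 1, r):
            if du * du + dv * dv >= r * r:
                continue
            shifted = np.roll(np.roll(diff, du, axis=0), dv, axis=1)  # I_diff(u, v)
            score += np.sum(sigma**2 / (sigma**2 + (diff - shifted)**2))
    return score
```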

[Figure 4-1 appears here: probing curves for panels (a) "Probing MI with gage DRRs" and (b) "Probing PI with gage DRRs", plotted against x, y and z axis offsets (top rows) and roll, pitch and yaw offsets (bottom rows).]

Figure 4-1: Results of two probing experiments evaluating (a) mutual information and (b) pattern intensity on the skull dataset. A displacement range of +/- 20 (mm) and a rotational range of +/- 45 (deg) were specified.

A collection of probing experiments similar to the ones displayed in Fig. 4-1 could verify that mutual information peaked when the correct alignment was evaluated. Although pattern intensity also took its maximum at the zero offset location (the ground truth value, in this case), we found more local extrema in the vicinity of the ideal transformation parameters. In Fig. 4-1, probing experiments are displayed with respect to all six of the free variables. Evaluation curves in the upper rows correspond to experiments with displacement modifications, while curves in the bottom rows contain the evaluation results due to perturbing the rotation angles (roll, pitch and yaw, respectively). We can see that, especially in the case of rotational changes, the curves corresponding to pattern intensity are more jagged. That means that the optimization procedure could more easily get trapped in local extrema when using PI as opposed to MI. These and more extensive analyses of the same sort led us to decide that pattern intensity was a less preferable function to work with.

We also carried out numerous probing experiments at different levels of resolution. Our aim was to use their results to support our argument about increasing robustness with decreasing resolution level. In Fig. 4-2, we present probing curves from two

identical experiments, with the only difference being that the input CT volume was downsampled by a factor of 2 and smoothed (with a small Gaussian kernel) in the case of the second one (Fig. 4-2 (b)). Analyzing the objective function curves, it is apparent that in the case of the downsampled dataset the peaks of the curves are less pointy, all six curves are smoother and there are fewer local extrema encountered around the ground truth position. (The curves in the top row, just as in the case of Fig. 4-1, indicate evaluation results due to displacement changes, and the curves at the bottom represent changes with respect to the rotational components.)

[Figure 4-2 appears here: probing curves for panels (a) and (b), plotted against x, y and z axis offsets (top rows) and roll, pitch and yaw offsets (bottom rows).]

Figure 4-2: Results of two probing experiments evaluating a cost function on (a) the original and (b) the downsampled and smoothed version of the same phantom pelvis dataset. A displacement range of +/- 20 (mm) and a rotational range of +/- 45 (deg) were specified.

4.2 Summary of the Registration Algorithm

We first provide a brief summary of our registration framework. The short, top-level outline is followed by more implementation details in the subsequent sections. The three major steps of our alignment algorithm are:

1. Preprocessing the input images and input volume

2. Initializing the imaging environment

3. Iterative optimization of the similarity measure

Step 1: Preprocessing

During the preprocessing step, we smooth the fluoro images to better match the resolution of the CT-derived DRR images during the alignment process. We also eliminate all artificial labels from the acquisitions that were placed there for patient identification, and we check whether the fluoroscopy data contains a black rim around the image margins. If it is present, it is an artifact of the image intensifier of the imaging X-ray machine (Figure 4-6 serves as a good example). In the case of the CT volume, the desirable window and level settings have to be defined. These determine the range and average value of the intensities, and they are usually set by radiologists following an image acquisition. In a multiresolution approach, it is at this stage that additional downsampling and smoothing operators are applied to the volumetric dataset and the fluoro images.

If the input volumetric dataset contains too much background (in the case of head imaging that can easily happen), we can also crop the volume. In this way we do not have to spend time ignoring voxels with no useful data content during the registration procedure. (Note, however, that this step is not equal to a full segmentation task. We do not eliminate the whole background, which provides useful information during alignment procedures; we just roughly estimate the smallest bounding volume around the imaged anatomy.)

Step 2: Initialization

The initialization step involves reading in the parameters that are known about the imaging environment. That information is necessary in order to simulate the X-ray creation procedure as accurately as possible. Also, at this point, we roughly position the CT volume in the scene: we make an initial guess about the parameters of transformation T. (We make the assumption that a rough estimate of the required transformation is always available. That is a realistic/reasonable expectation,

as registration algorithms solving the proposed problem are, in general, not applied to find alignments greater than 30 (deg) and 30 (mm), but instead to provide finer details.)

Step 3: Optimization Loop

Non-gradient Powell Approach

Until convergence is detected or, in other words, as long as the Powell tolerance measure is smaller than the individual improvements that are made after each iteration towards the function optimum, two steps alternate. First, the evaluation of the similarity measure given the CT volume, the observed projection images and the current transformation estimate T takes place. Second, the transformation estimate is updated in a way that increases the matching score. This method can be treated almost as a black box procedure. Besides the Powell tolerance measure, the order of the linear optimizations and an upper limit on the number of optimization iterations, there are no other parameters that need to be fine-tuned.

Gradient-based Maximization

The current version of the algorithm executes the iterated part of the code a predetermined number of times; in our case, that number is 5000. (This number was experimentally determined.) Hence, for a fixed number of iterations and for all fluoro-DRR image pairs, we follow these steps.

1. Fluoro sampling: Randomly sample image points from the observed image and extract their intensities (U(X), where X denotes the sample collection).

2. DRR sampling: Calculate the corresponding DRR values (V(T(X))) by applying the current transformation estimate to the CT volume and running the ray-casting algorithm. (If we did not use the sparse sampling approach but utilized all available intensity information, we would apply one of the more efficient techniques for creating the DRR images (see Section 2.1.2), because

the fast DRR-creating strategies usually achieve significant speedup only if the whole DRR image has to be calculated.)

3. *Objective function evaluation: This step is marked with a (*) symbol as it is not an integral part of the algorithm when using the gradient maximization method. This approach guides its search towards the target extremum considering only the gradient and not the value of the objective function. We evaluate the MI estimate at each iteration for the purposes of monitoring the convergence behavior of the algorithm as a function of the iterations, but it is not a required step in the optimization procedure. To compute our similarity measure, mutual information, we use Eq. (3.4).

4. Transformation update: Compute the transformation updates and assign a new transformation estimate according to Eq. (3.7), applying all the computations derived for the probability density and partial derivative estimates in Chapter 3. The update is

T_{updated} = T + \frac{\lambda}{N} \sum_{j=1}^{2} \sum_{i=1}^{N} \sum_{z \in ray(T_j(x_i), S_j)} \frac{\partial Vol(z)}{\partial T} \left( \frac{1}{p(u_i, v_{ji})} \frac{\partial p(u_i, v_{ji})}{\partial v_{ji}} - \frac{1}{p(v_{ji})} \frac{\partial p(v_{ji})}{\partial v_{ji}} \right)   (4.2)

for the histogramming approach and

T_{updated} = T + \frac{\lambda}{N_A} \sum_{x_j \in A} \sum_{x_i \in B} (v_j - v_i)^T \left[ W_v(v_j, v_i) \psi_v^{-1} - W_{uv}(w_j, w_i) \psi_{uv}^{-1} \right] \frac{d}{dT}(v_j - v_i)   (4.3)

for the method using Parzen estimation.

In Eq. (4.2) and (4.3), \lambda denotes the learning rate of the update variables, which is experimentally determined for our application. The learning rates for the rotational and translational components are significantly different, but the influence of their unit updates on the transformation should be approximately the same. Many times, it can also be useful to distinguish between in-plane and out-of-plane transformations, as the latter are usually more difficult to estimate correctly.
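Putting steps 1, 2 and 4 together, the single-view optimization loop reduces to a few lines. The sketch below is a skeleton only: the sampling, ray-casting and Eq. (4.2) gradient routines are passed in as placeholder callables, the default sample count is an assumption, and the biplanar case would sum the gradients of both views.

```python
import numpy as np

def register(fluoro_sample_fn, drr_sample_fn, mi_grad_fn, T0, lam,
             n_iters=5000, n_samples=50):
    """Stochastic gradient ascent loop (steps 1, 2 and 4; step 3 is optional
    monitoring). T is handled as a 6-vector of rotation and displacement
    parameters."""
    T = np.asarray(T0, dtype=float)
    for _ in range(n_iters):
        xs, u = fluoro_sample_fn(n_samples)    # step 1: sparse fluoro samples U(X)
        v, dv_dT = drr_sample_fn(T, xs)        # step 2: ray-cast DRR samples V(T(X))
        T = T + lam * mi_grad_fn(u, v, dv_dT)  # step 4: update per Eq. (3.7)/(4.2)
    return T
```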

We found that the magnitude of the step size is more crucial in the case of the histogramming approach, and especially in the case of the higher resolution computations. It took us some experimentation to find the best set of values (which can sometimes vary among the different datasets as well).

In most of our experiments, convergence took place much earlier than the 5000 iterations that were used. It usually took less than half as many steps as we specified to reach the ideal settings. It also happened, though, that in certain cases the fixed number of iterations was not enough. Hence, instead of using the predetermined number of iterations as a stopping criterion, it would be more desirable to halt the registration procedure as soon as convergence is detected. (We did not investigate this problem.)

4.3 Registration Results

We would like to point out at the very beginning of this section that no special code optimization has been applied to our algorithm. All relative speed-ups demonstrated are purely the result of either the sampled or the multiresolution approach. We also carry out a great number of additional/superfluous similarity function evaluations and full DRR generations that significantly increase the execution time. Hence, our running times are not directly comparable to solutions geared towards minimal running time.

Registration Error Evaluation

In the case of our controlled experiments, when we possess ground truth information about the searched transformation, we determine the quality of the registration results by calculating an error transformation, T_error. This variable is defined to be the transformation that takes the registration output pose to the ground truth one:

T_{GT} = T_{error} \circ T_{output}.   (4.4)
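Eq. (4.4) gives T_error as T_GT composed with the inverse of T_output; with 4x4 homogeneous matrices (an assumed representation) the error tuple described next can be computed as:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def registration_error(T_gt, T_out):
    """Return (d_e, r_e): the displacement magnitude (mm) and rotation angle
    (deg) of T_error, where T_GT = T_error . T_output (Eq. 4.4)."""
    T_err = T_gt @ np.linalg.inv(T_out)
    d_e = np.linalg.norm(T_err[:3, 3])
    r_e = np.degrees(np.linalg.norm(Rotation.from_matrix(T_err[:3, :3]).as_rotvec()))
    return d_e, r_e
```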

When referring to the magnitude of the registration error, we actually describe a tuple, (d_e, r_e). One component of that tuple, d_e, is the magnitude of the displacement component of T_error, and the second element, r_e, is the magnitude of the rotation angle encoded by the unit quaternion component of the error pose. (See Chapter 2 for a detailed description of pose parameters.)

We want the reader to notice that d_e and r_e do not directly represent errors in the searched components of the transformation. For instance, displacement errors in the estimate for sub-transform D_d are summed together to produce d_e, and a non-zero d_e could also be the result of merely a noisy estimate for R, the rotational sub-transform of T, without any actual error in D_d itself. (As a reminder, transformation T, in Chapter 2, is defined to be T = D_c \circ R \circ D_d.) See an example of that case in Fig. 4-5 (b). Even though we only perturbed the ground truth pose by a rotation angle of 15 degrees around the y-axis, there is a significant error in the displacement term d_e as well. This unusual error interpretation is the result of the particular way we constructed transformation T. Results in the registration literature are often presented with respect to the displacement and rotation angle errors specific to the individual displacement directions and rotation axes. Therefore, one should keep this difference in mind when comparing measures produced by the various approaches.

In order to determine whether we perform our task with high accuracy, we establish a range for d_e within which the results satisfy our interpretation of sub-voxel accuracy requirements. For r_e, no special bound needs to be defined, as R is the only rotational component of T, so all rotation angle errors are directly related to R. If (dx, dy, dz) denotes the size of the CT voxels, we formulate a criterion for sub-voxel accuracy in the displacement parameter by specifying the range

0 \le d_e \le \sqrt{dx^2 + dy^2 + dz^2}.   (4.5)

That is to say, we bound the displacement error term by the magnitude of the diagonal of a volume element. That upper limit represents a worst case scenario: having a

translational offset of exactly the length of the volume element in all three directions. So, for example, if the voxel dimensions were (0.5; 0.5; 1), then a displacement error in the range of d_e \le \sqrt{0.5^2 + 0.5^2 + 1^2} \approx 1.22 meets the criterion of being highly accurate. As this measure depends on the input data specifications, we calculated its value for all of the CT volumes that we used in our experiments. These values are denoted by d and appear in the last column of Table 4.1.

As an alternative, we could also look at the errors in the subcomponents themselves. That is, we could look at how much the transformations D_d and R differ from their ground truth values. We look at these measurements more closely in the case of the real X-ray - CT experiments in Section 4.5, where we obtain results with a bigger variance than in the case of the controlled setup.

Objective Function Evaluation

In order to closely follow the convergence pattern of our algorithm, we computed the mutual information measure at each iteration of the registration procedure. This is an optional step in the case of the stochastic gradient ascent methods. Although the objective function curves are quite jagged in all cases (which is an expected result of the sparse sampling method), we can definitely observe an overall convergence pattern. Some example figures, to which we are going to refer in a later analysis, are displayed in Fig. 4-4. In the case of the Powell maximization method, we did not record intermediate evaluation results throughout the registration process.

4.4 CT-DRR Experiments

4.4.1 CT-DRR Registration

We designed several controlled experiments in order to obtain a thorough characterization of the algorithmic behavior of our registration methods given a known, ground truth transformation parameter. We wanted to test their accuracy under a wide range

of circumstances: on different resolution levels, with different learning rates and with different sample sizes. After an initial offset was specified, high-quality, CT-derived DRR datasets of a plastic pelvis, plastic skull, real skull, real head and plastic lumbar spine were registered to their volumetric counterparts. The specifications for the CT datasets from which we created the simulated projection images are listed in Table 4.1. The second column contains row, column, and slice information in that order, and the third column specifies the voxel dimensions (dx, dy, dz). The last column, with quantity d, represents the sub-voxel accuracy upper bound (Eq. (4.5)) to which we compare our registration results. As a fascinating bit of information, one set of our experiments was run on Phineas Gage's skull dataset. (We provide a brief description of his famous head injury in the Appendix.)

DATASET                 VOLUME DIMENSIONS   VOXEL DIMENSIONS   d
Plastic Pelvis          265 x 455 x 107     [0.6621; ; 2.0]
Plastic Pelvis (sm1)    132 x 227 x 107     [1.3242; ; 2.0]
Plastic Pelvis (sm2)    66 x 113 x 107      [2.6484; ; 2.0]
Plastic Pelvis (sm3)    33 x 56 x 53        [5.2969; ; 4.0]
Real Skull              512 x 512 x 390     [0.4844; ; 0.5]
Real Skull (sm1)        256 x 256 x 195     [0.9688; ; 1.0]
Real Skull (sm2)        128 x 128 x 97      [1.9375; ; 2.0]
Gage's Skull            512 x 512 x 424     [0.4473; ; 0.5]
Gage's Skull (sm1)      256 x 256 x 212     [0.8945; ; 1.0]
Gage's Skull (sm2)      128 x 128 x 106     [1.7891; ; 2.0]
Plastic Skull           188 x 128 x 105     [1.0156; ; 2.0]
Real Head               512 x 512 x 127     [0.4883; ; 1.0]
Plastic Lumbar Spine    512 x 512 x 103     [0.2344; ; 1.5]

Table 4.1: CT dataset specifications; sm1: smoothed volume on hierarchy level 2; sm2: smoothed volume on hierarchy level 3; sm3: smoothed volume on hierarchy level 4.

Single views of the observed DRR images that were used as the simulated fluoro images in the registration experiments are displayed in Figure 4-3.
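The d column of Table 4.1 is the voxel-diagonal bound of Eq. (4.5); a one-line check of the worked example above (the helper name is illustrative):

```python
import numpy as np

def subvoxel_bound(dx, dy, dz):
    """Sub-voxel accuracy bound d from Eq. (4.5): the voxel diagonal length."""
    return float(np.sqrt(dx**2 + dy**2 + dz**2))

print(subvoxel_bound(0.5, 0.5, 1.0))   # ~1.2247, the (0.5; 0.5; 1) example above
```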

(a) Real skull   (b) Pelvis phantom   (c) Phineas Gage's skull   (d) Lumbar spine segment   (e) Real head

Figure 4-3: Single-view simulated fluoroscopic images from the controlled experiments.

In the following, we provide a performance evaluation of our registration approaches on these datasets. We focus on three key features of the algorithms: benefits from the multiresolution hierarchy, capture range and accuracy. During the evaluation, we use the following naming convention to distinguish between our algorithms with different optimization methods:

Reg-Pow: registration using the Powell maximization method

Reg-Hi: registration using stochastic gradient ascent with sparse histogramming for density estimation

Reg-Pz: registration using stochastic gradient ascent with Parzen Windowing for density estimation

Further abbreviations used in the succeeding sections are:

LEVELS sm0, sm1, sm2, sm3: indicate the various levels of resolution we use in the hierarchy. They denote, respectively, the highest level of resolution (the original dataset with no downsampling) and the 1st, 2nd and 3rd levels of the hierarchy of downsampled datasets. (The downsampling procedure always takes place with respect to a factor of two.)

No.: number of experiments executed for the given task.

Machine Specifications

We had access to two types of computing resources, and we indicate their characteristics in Table 4.2. The name in the first column gives the abbreviation by which we refer to each of them in our analysis.

NAME   MODEL NAME                 CPU (MHz)   CACHE SIZE
M1     Pentium III (Coppermine)               KB
M2     Pentium III (Katmai)                   KB

Table 4.2: Computing resources machine specifications.

4.4.2 Multiresolution Approach

Motivation

As we have already mentioned in Chapter 2, we investigated a hierarchical approach to the 2D-3D registration problem. The idea behind this formulation stems from an approach originally offered in the field of image compression [37]. Essentially, as we descend to lower levels of the hierarchy, we aim to eliminate superfluous information encoded in the dataset and attempt to represent it in a more compact manner. Since the introduction of this strategy, it has been widely used in the computer vision community. In medical imaging applications, for instance, excellent results have been presented in multi-modal 3D-3D head registration applications [16, 18].

Our main motivation behind running our experiments on various levels of resolution was to increase the speed and the robustness of our alignment procedure. Even if we did not want to find the fastest solution to the problem, we can see in Table 4.1 that most of the datasets available to us are extremely large. These were acquired specifically to support research efforts, so their resolution is generally higher than that of an acquisition for medical/treatment purposes would be. For example, the skull dataset (indicated in the 5th row of Table 4.1) has 390 slices and a slice thickness of 0.5 mm, which is 2 or 3 times more than it would have been if requested for ordinary diagnostic purposes. Handling such large datasets efficiently is a challenging task, especially when we have to traverse the volume several times in order to produce full projection images (in the case of the Powell method and when displaying intermediate registration results). Hence, given the 3D and 2D input datasets, we downsampled and smoothed them (with a Gaussian kernel) to obtain versions of the original with lower resolution. Due to the high accuracy of our initial volumetric datasets, we used 3-4 levels of hierarchy. The data specifications for the lower resolution volumes are also included in Table 4.1.
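A minimal sketch of this pyramid construction, assuming uniform factor-of-two downsampling in every dimension and an illustrative kernel width (the thesis only downsamples a dimension when doing so keeps the voxels roughly cubical):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_pyramid(vol, levels=3, sigma=1.0):
    """Return [sm0, sm1, ..., smN]: each level is a Gaussian-smoothed,
    factor-of-two downsampled copy of the previous one."""
    pyramid = [vol]
    for _ in range(levels):
        smoothed = gaussian_filter(pyramid[-1], sigma)
        pyramid.append(smoothed[::2, ::2, ::2])   # downsample by two in each dimension
    return pyramid
```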

Experiments

We demonstrated in Section 4.1 that the objective function can be made much smoother if we downsample the original input datasets; see Fig. 4-2 for an example, where we show probing experiments on Gage's skull dataset. It is apparent from the figures that local extrema can be avoided by downsampling the images. To measure the registration speedup that we could gain from applying the hierarchy, we ran some experiments with the same initial offset applied to the ideal/ground truth transformation at each level of the pyramid, using all three of the optimization methods. In all of the experiments, we recorded the running time it took for the algorithms to converge to the optimal parameters. We summarize our results in Table 4.3.

For these experiments, we used two different ways to initialize our transformation estimate. We either defined a fixed offset of 20 or 30 mm in displacement, or in rotation angle, for one of the free variables, or we specified a displacement and rotation range from which the offset value was randomly picked for all six of the unknown elements. The former initialization method, in general, allows testing for the maximum possible offsets that can be recovered in the individual dimensions of the space: only a single parameter is offset, and the goal is to identify its upper bound. The latter initialization helps to evaluate the robustness of an algorithm in a given subspace. As all of the components are perturbed from their optimal value, it is a more complex task to optimize for them simultaneously.

We recorded the computation time for all three of the alignment strategies: the Powell, the Parzen Windowing and the histogramming methods. In all cases, but most significantly for the Powell approach, the computational speedup achieved by the hierarchy was enormous. As Table 4.3 indicates, for the direction set method, computations on the pelvis dataset converged several times faster on the third level of the hierarchy than on the second, and in the case of the Gage dataset the algorithm completed faster on the fourth pyramid level than on the second and third, respectively. (In our table, running time is indicated in seconds, and

the offset measures are given in mm and degrees.)

METHOD    DATA     LEVEL   TIME (sec)   CPU   No.   OFFSET
Reg-Pow   pelvis   sm                   M2    6     [20 mm; 15 deg]
                   sm                   M2    6     [20 mm; 15 deg]
          gage     sm                   M1    6     [20 mm; 15 deg]
                   sm                   M1    6     [20 mm; 15 deg]
                   sm                   M1    6     [20 mm; 15 deg]
Reg-Hi    pelvis   sm                   M1    6     [20 mm; 15 deg]
                   sm                   M1    6     [20 mm; 15 deg]
                   sm                   M1    6     [20 mm; 15 deg]
          skull    sm                   M1    6     [30 mm; 20 deg]
                   sm                   M1    6     [30 mm; 20 deg]
                   sm                   M1    12    [30 mm; 20 deg]
Reg-Pz    gage     sm                   M2    6     [0-10 mm; 0-20 deg]
                   sm                   M2    6     [0-10 mm; 0-20 deg]
                   sm                   M2    6     [0-10 mm; 0-20 deg]

Table 4.3: Timing measurements contrasting registration running times on different hierarchy levels.

In the case of the histogramming approach (using the Reg-Hi method), running the alignment procedure on the pelvis dataset was faster on the second and first pyramid levels than on the original input. The same experiments produced speed-ups (1.94, among others) in the case of the skull experiments when comparing results on the original and the second level of the hierarchy. With the Parzen Windowing approach we achieved similar results when registering images taken of the skull dataset.

It is important to point out that we carry out the downsampling procedure with a factor of two in all desired dimensions, in a way that the voxels in the resulting volume approximate cubical elements. Therefore, with each level of the hierarchy, the data volume size grows by 2^3 in the worst case scenario. For algorithms that fully traverse the 3D image and produce full-sized DRRs, that increase appears directly in the running time. In the sampled approach, the slowdown is (approximately) at most a factor of two if the number of samples used remains the same between the

levels. That is because we only have to access more voxels when following rays going through the volumes.

We note that the execution times of the three different approaches should not be directly compared using the data indicated in Table 4.3. That is especially true for the two different optimization strategies. The reason is that the experiments using the Powell method ran until convergence was detected, while the gradient methods were executed for a fixed number of iterations. We are able to conclude, though, that the hierarchical approach is inevitable when using the non-gradient maximization method on large datasets. Otherwise the running time grows unreasonably high, exceeding several hours. In the case of a sampled approach, using only a few data points for the computations keeps the execution time well-manageable. The relative gain between hierarchy levels is smaller but still significant.

Robustness, Size of Attraction Basin

Given the ground-truth pose parameter T_GT, the capture range of the algorithm with respect to a particular dataset can be established by finding the greatest perturbation of the individual components that can be consistently recovered by our application. First, we could get a good intuition for the extent of the capture range while running the probing experiments (Section 4.1). As probing the six dimensional parameter space is computationally highly expensive, lower dimensional experiments can be carried out to characterize the objective function. However, the probing results can be misleading, as they are not capable of representing the parameter space in its full complexity. So, to evaluate the registration power of our optimization methods, we ran some initial experiments by only offsetting one of the free parameters from its ideal value. In these controlled experiments, we found that sizable displacement and rotation angle offsets could generally be registered with all three of the methods. The alignment was more sensitive when the images were truncated. In these cases, the maximum displacement offsets had to be lowered.

We then also ran experiments where the initial offset was determined by randomly assigning offset values to all of the free variables within a pre-specified range. As all six parameters were offset at the same time, the individual perturbation ranges were specified to be smaller in this case.

Accuracy Testing

Preliminary Results

Without using the hierarchical approach, and offsetting only one of the free variables at a time, we obtained good preliminary results. We were able to show that the algorithms could be applied to several types of datasets with good accuracy. We assigned offsets within fixed displacement (mm) and rotation (deg) ranges and ran the algorithms on four different datasets: a plastic pelvis, a plastic skull, a plastic lumbar spine, and a real head. Most of the time, the quality of the convergence results depended only on finding the appropriate set of registration parameters (mostly step size and sample size). We present our results for the new Reg-Hi method in Table 4.4.

DATASET         No.   d_e (mm)   r_e (deg)
Pelvis           -        -          -
Plastic Skull    -        -          -
Plastic Spine    -        -          -
Real Head        -        -          -

Table 4.4: Controlled registration accuracy tests using the Reg-Hi method; no hierarchy.

Table 4.4 displays the number of experiments executed for each dataset and the average magnitude of the displacement component and of the rotation angle of T_error. In the majority of the experiments, the displacement error terms fall under the d value (Table 4.1) and the rotation angle errors are under 0.5 deg. We notice a relative weakness in the accuracy results for the real-head experiments. This most probably stems from the fact that that dataset is severely truncated, with the top and the bottom (below the nose) of the head completely missing (see Fig. 4-3).
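To make the d_e and r_e measures of Table 4.4 concrete, the sketch below computes them from two 4x4 homogeneous rigid-body matrices. Taking d_e as the magnitude of the translational part of T_error and r_e as the rotation angle of its rotational part is one common convention and is assumed here, not copied from the thesis code.

```python
import numpy as np

def error_measures(T_gt, T_est):
    """Return (d_e, r_e): translation magnitude and rotation angle (deg) of T_error."""
    T_err = np.linalg.inv(T_gt) @ T_est          # error transform between the two poses
    d_e = np.linalg.norm(T_err[:3, 3])           # magnitude of the displacement component
    cos_theta = (np.trace(T_err[:3, :3]) - 1.0) / 2.0
    r_e = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return d_e, r_e
```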

With Hierarchy

After obtaining the preliminary results described above, we wanted to optimize both the accuracy and the running time of our methods, so we implemented the multiresolution approach. In the following, we analyze performance results on two particular datasets, comparing all three of our algorithms. We decided to run these more extensive experiments on the Gage skull dataset and on the images of the phantom pelvis, as our collaborators are particularly interested in seeing results related to these two anatomies.

We used the multiresolution hierarchy in the following way. We started the experiments on the lowest level of the volume pyramid and randomly offset the ground-truth transformation within a particular range. The extent of this range was determined from initial capture-range calculations. More specifically, the offsets were specified by providing an upper bound for each of the displacement components and a maximum value for the rotation angle, while the rotation axis was determined randomly; the four parameters were selected uniformly from their specified ranges. We then continued the alignment process on higher-resolution levels, using the results of the lower stages as inputs, until the top level of the pyramid was reached. It was not only the resolution that changed between the multiresolution steps: we also used lower step sizes (λ) and more sample points towards the top of the hierarchy. In the Reg-Pow experiments we additionally added a small perturbation to the starting poses on the higher levels, in order to prevent the optimization method from getting trapped in local extrema. The stochastic approach introduces a sufficient amount of noise into its estimates that this perturbation was not necessary for the gradient ascent procedures. The results of these controlled experiments are summarized in Table 4.5. All timing results were measured on our M1-type machines.
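A minimal sketch of how a random initial offset such as "[15,15,15,20°]" might be drawn is shown below: per-axis displacement bounds plus a maximum rotation angle about a randomly oriented axis, all sampled uniformly. The function and parameter names are illustrative assumptions; whether the displacements are signed or one-sided is not specified in the text and is assumed here.

```python
import numpy as np

def random_offset(disp_bounds_mm=(15.0, 15.0, 15.0), max_angle_deg=20.0, rng=None):
    """Draw (translation offset, unit rotation axis, rotation angle) for one experiment."""
    rng = np.random.default_rng() if rng is None else rng
    t = rng.uniform(-1.0, 1.0, size=3) * np.asarray(disp_bounds_mm)  # bounded displacement
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)                 # uniformly random rotation axis
    angle = rng.uniform(0.0, max_angle_deg)      # rotation angle below its upper bound
    return t, axis, angle
```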

METHOD   DATASET  LEVEL  No.    d_e (mm)  r_e (deg)  OFFSET            TIME
Reg-Pz   Pelvis   sm0    9/-    -         -          from sm1          -
                  sm1    9/-    -         -          from sm2          -
                  sm2    9/-    -         -          from sm3          -
                  sm3    8/-    -         -          [15,15,15,20°]    843
         Skull    sm0    -      -         -          from sm1          -
                  sm1    -      -         -          from sm2          -
                  sm2    -      -         -          [10,10,10,20°]    2084
Reg-Hi   Pelvis   sm0    10/-   -         -          from sm1          -
                  sm1    10/-   -         -          from sm2          -
                  sm2    10/-   -         -          from sm3          -
                  sm3    10/-   -         -          [15,15,15,20°]    699
         Skull    sm0    -      -         -          from sm1          -
                  sm1    -      -         -          from sm2          -
                  sm2    -      -         -          [10,10,10,20°]    2820
Reg-Pow  Pelvis   sm0    -      -         -          from sm1          -
                  sm1    -      -         -          from sm2          -
                  sm2    -      -         -          from sm3          -
                  sm3    -      -         -          [15,15,15,20°]    240
         Skull    sm0    -      -         -          from sm1          -
                  sm1    -      -         -          from sm2          -
                  sm2    -      -         -          [12,12,12,15°]    561

Table 4.5: Registration results of methods Reg-Pz, Reg-Hi and Reg-Pow on controlled experiments of a phantom pelvis and a real skull.

Although, in general, the pre-specified 5000 iterations were more than sufficient for the gradient-based algorithms to converge, we encountered exceptions with both the Reg-Pz and the Reg-Hi methods. In those cases, the randomly assigned offset values were so large compared to the given step size that the registration could not complete within the provided interval. When that happened, we did not include the results of those runs in our accuracy measures; the fourth column of Table 4.5 then indicates the actual number of runs, out of the total, over which the accuracy measures are reported. (In the case of the Reg-Pz pelvis experiments, the number of considered experiments increases from 8 to 9 after the second hierarchical level: even though the first set of iterations was not sufficient, during the second step the parameters did manage to converge to the optimal settings.)
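The coarse-to-fine schedule that produces a table like 4.5 can be sketched as a simple driver loop: each pyramid level is registered with the previous level's output pose as its starting point, while the step size shrinks and the number of samples grows. `register_level` is a hypothetical stand-in for one run of the MI-based optimizer on a single level, and the numeric schedules are placeholders (only the 150-sample figure quoted later in the text is taken from the thesis).

```python
def multiresolution_register(register_level, coarse_to_fine_volumes, initial_pose,
                             step_sizes=(4e-2, 2e-2, 1e-2),
                             sample_counts=(50, 100, 150)):
    """Run one registration per pyramid level, warm-starting from the previous result."""
    pose = initial_pose
    for volume, lam, n_samples in zip(coarse_to_fine_volumes, step_sizes, sample_counts):
        # lower the step size (lambda) and use more samples towards the finer levels
        pose = register_level(volume, pose, step_size=lam, n_samples=n_samples)
    return pose
```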

The offset range for the registration algorithms was almost the same in all cases, with a slight variation in the displacement settings (and only once in the rotation bound). The capture range for the skull dataset was smaller in translation, but the rotational element did not have to be modified. The table entries nicely show how, by ascending the multiresolution pyramid, the registration errors decrease, both with respect to the rotational and the displacement components.

When comparing the results of the controlled Reg-Hi and Reg-Pz experiments, we can say that their performance was very similar. Running on the pelvis and the skull datasets, they both completed the registration task even before getting to the top-most level of the hierarchy. (See Table 4.1 for the upper bound on error terms required for high accuracy.) We have to admit, though, that in the case of the histogramming method we had to experiment more with the appropriate parameter settings, especially as we got close to the top of the registration pyramid. It proved to be more crucial to increase the sample size for more accurate density estimations, and Reg-Hi was also much more sensitive to the optimization step size. The data also suggests that the Reg-Pow procedure might have been attracted to several local extrema; that would be the only explanation for the fact that, even on the original dataset, its error terms are larger than those of the two other methods.

For each registration run, we also provide the running time. As we have already hinted in earlier sections, for the gradient-based methods these represent only an upper bound, as convergence might occur well before the pre-specified iteration number is completed. Although the Powell method converges very quickly during the first couple of stages, the additional refinement of the intermediate results approaching the top of the hierarchy takes an extremely long time. Such a delay prevents this approach from even being considered for interventional applications.

To demonstrate the extent of the registration errors in a way other than the numerical scores, we present the results of the registration experiments on the pelvis phantom images using the non-gradient technique. These examples contain the contours of projection images produced by the registration output parameters, overlaid on the observed input acquisitions. See Figure 4-4 for a qualitative evaluation of the performance of Powell's method. By visual inspection, only tiny misalignments are detectable in the images created with the output of the algorithm; the majority of the contours align well.

Convergence Pattern

As part of the registration algorithm for the gradient-based optimization approaches, we evaluated the similarity measure at intermediate steps of the procedure. We recorded the MI estimates together with the error transformation components. Displaying these values as a function of the iteration number allowed us to monitor the convergence behavior of the alignment procedure closely. Two such convergence plots are displayed in Fig. 4-5; they present the results of a Reg-Hi plastic pelvis experiment. On the left-hand side, Fig. 4-5 (a), the displacement parameter was perturbed by 20 (mm) in the direction of the y-axis, and in Fig. 4-5 (b) the rotation angle around the y-axis was perturbed by 15 (deg). The MI curve in both experiments is very noisy. This can be explained by the fact that the objective function is only evaluated on a small, random set of sample points. However, it is apparent that the closer the transformation estimate gets to its optimal value (as the iteration counter increases), the higher the reward value assigned to the current set of transformation variables. The optimization method in this particular example was Reg-Hi and the number of sample points used was 150. In the first experiment, Fig. 4-5 (a), the initial error in the displacement component quickly causes an offset in the rotational component as well; only then do they converge to the zero-offset solution simultaneously. The second run, Fig. 4-5 (b), is a good example of the discussion in an earlier section: even though it is only the rotational component of transformation T that is offset initially, our displacement error measure d_e is also non-zero at the outset of the alignment procedure. As the rotation error decreases, the displacement error vanishes, too.
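The convergence monitoring behind Fig. 4-5 can be sketched as a logging loop around the optimizer. `step` is a hypothetical single-iteration update that returns the current pose and its MI estimate, and `error_measures` is the helper sketched earlier; both are stand-ins for the thesis implementation.

```python
import numpy as np

def monitor_convergence(step, error_measures, T_gt, T_init, n_iters=5000):
    """Log (MI estimate, d_e, r_e) at every iteration of a gradient-ascent run."""
    T = T_init
    mi_log, d_log, r_log = [], [], []
    for _ in range(n_iters):
        T, mi_estimate = step(T)                 # one stochastic gradient update + MI value
        d_e, r_e = error_measures(T_gt, T)
        mi_log.append(mi_estimate)
        d_log.append(d_e)
        r_log.append(r_e)
    return np.asarray(mi_log), np.asarray(d_log), np.asarray(r_log)
```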

Figure 4-4: Registration results of a phantom pelvis controlled experiment with the Reg-Pow method; contours of the registration results are overlaid on the observed DRR images. (a) With initial transformation estimate; (b) with transformation estimate resulting from registration.

Figure 4-5: Sample output from a controlled set of Reg-Hi experiments. Dataset: plastic pelvis. Initial offsets: (a) Δy = +20 (mm) and (b) β = +15 (deg). The plots display the magnitude of the displacement error, the rotation angle error, and the MI estimate at each iteration.

Registration Parameter Settings

It was mainly for the registration methods using gradient-based optimization that we had to set the operating parameters carefully. For the Powell method, the only setting that we tried to alter was the tolerance level; changing that variable did not produce significantly different results, so we did not invest time in quantifying its influence on the registration results. For the Reg-Hi and Reg-Pz methods there are several settings that can be adjusted, for example: the size of the sample collection, the step size, the iteration number, the standard deviation of the Gaussian kernels (for Reg-Pz), and the number of intensity buckets used for constructing the histograms. Some of these values were treated as constants and were never changed throughout our study: the standard deviation of the Gaussian kernel was set to 2.0 and the number of intensity bins to 32. We experimented more with the sample size and the step size. For all our sampled experiments we used only a small number of sample points, increasing towards the top of the hierarchy. Most of the time was invested in finding appropriate step sizes (learning rates) for the optimization procedure.
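Before discussing the step sizes further, the operating parameters mentioned above can be gathered in one place. The Parzen kernel width (2.0), the 32 intensity bins, and the 5000-iteration budget are the constants quoted in the text; the per-level sample counts and the separate translational and rotational step-size schedules are illustrative placeholders, since the tuned values are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class RegistrationSettings:
    parzen_sigma: float = 2.0                     # std. dev. of the Gaussian kernels (Reg-Pz)
    n_intensity_bins: int = 32                    # intensity buckets for the histograms (Reg-Hi)
    n_iterations: int = 5000                      # fixed iteration budget of the gradient methods
    # coarse-to-fine schedules; numeric values are placeholders, not the tuned ones
    sample_counts: tuple = (50, 100, 150)
    translation_step: tuple = (2e-2, 1e-2, 5e-3)  # learning rate for the displacement update
    rotation_step: tuple = (1e-3, 5e-4, 2e-4)     # a separate, smaller rate for the rotation update
```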

We found that, especially when the multiresolution approach was not used and the algorithm was run only on the top-most level, the maximization could be very sensitive to these values. If the learning rates were too high, convergence did not take place; when they were too small, the predetermined number of iterations was not sufficient to cover the full convergence path. What is more, we differentiated between the update rates of the rotational and the translational components. It was crucial to set the ratio between them properly, as their influence on the update had to be balanced; neither of the update terms was allowed to dominate the other.

4.5 CT-Fluoroscopy Experiments

We obtained real X-ray fluoroscopy and X-ray images for two of the CT datasets listed in Table 4.1: we had corresponding biplanar 2D acquisitions of the phantom pelvis and of Phineas Gage's skull. We present these four images in Fig. 4-6. Unfortunately, the ground-truth specifications describing the imaging geometry at the time of image acquisition were not recorded. Hence, our best estimates of the ideal transformation T_GT were the results of a manual registration procedure. We attempted to reconstruct the imaging environment as accurately as possible while producing the best equivalents of the real fluoroscopy or X-ray images. However, it is highly probable that the ground-truth approximations contain some (hopefully slight) amount of error. Therefore, when registering the real fluoro/X-ray images to the CT volume, we allow a larger variance in the accuracy results compared to the standards established in the synthetic examples (Section 4.4).

In order to make the extent of the registration errors more intuitive to interpret, we also present our registration results in two additional ways for these datasets. In this section we look not only at the error term provided by the computation of the error transformation T_error, but also at errors with respect to the subcomponents of T themselves: we examine how precise our estimates are regarding the rotational component R and the displacement term D_d.

Figure 4-6: Real X-ray fluoroscopy of the phantom pelvis and real X-ray images of Phineas Gage's skull. (a) Phantom pelvis: lateral acquisition; (b) phantom pelvis: AP acquisition; (c) Phineas Gage's skull: sagittal view; (d) Phineas Gage's skull: coronal view.

We also provide a qualitative analysis of our results: we produce a DRR image both with the offset parameters and with the results of the registration algorithm, and overlay their main contours onto the observed (real) 2D images. In this way we are able to see how much of the error was recovered by our alignment procedure and also how well our final estimates match the original acquisitions. To extract the major outlines of the examined objects in these experiments, we used the Canny edge detector algorithm [4].

Experiments with X-Ray Images of Gage's Skull

We ran extensive experiments on the Gage dataset with all three of our registration methods. The initial offset range was specified to be a random combination of at most [0,10,10] or [5,10,10] (mm) [5] in displacement and at most 20 degrees in rotation. We used at most three levels of the multiresolution registration pyramid. Below we present the results obtained by the Parzen windowing and the Powell methods. We do not include a report on the histogramming technique, as we did not find it to be robust and consistent enough in these experiments: with Reg-Hi, the confident range of convergence was much smaller than with the other two strategies (only [5,5,5,5°] as opposed to [5,10,10,20°]), and it also seemed even more sensitive to code parameter settings than in the controlled experiments.

Reg-Pz Experiments

We ran 30 experiments with the Parzen windowing method starting from the third pyramid level (sm2), and we obtained the following encouraging results. On the third level of the hierarchy, with the input volume downsampled twice, 27 (90%) of the experiments converged to a wider neighborhood of the ground-truth pose. In two cases, the initial offsets were too large to be recovered; these originated from a coupling of a large rotational and a large displacement offset.

[4] Thanks to Lily Lee for providing the C++ implementation of the Canny edge detector.
[5] The range is smaller in the x-direction as the chin is missing from the datasets.

In the remaining third experiment, convergence had started but the number of iterations was not sufficient to complete the procedure.

On the second level, continuing the registration with the output from the lower hierarchy level, we could achieve even higher accuracy. Although in all cases we got closer to the optimal settings, the three cases that did not get close enough in the first round of registration remained behind. That is explained by the fact that on higher hierarchy levels the step size decreases; hence, even with the increased resolution, the size of the error that can be corrected decreases. (Also, we showed that the danger of being attracted by local extrema increases.)

We first present our results quantitatively. We prepared two plot diagrams displaying the displacement and the rotational error terms both prior to and after running the registration procedure. We calculated these error measures both with respect to the components of the error transformation T_error and with respect to the individual (variable) components of transformation T: D_d and R. The error terms, in all cases, are specified with respect to the manually determined ground-truth pose. Figure 4-7 displays results obtained on the third pyramid level and Fig. 4-8 reports on the outcomes achieved on the second level. In each of the sub-plots, the x-axis stands for displacement error (measured in mm) and the y-axis represents rotation angle error (measured in degrees). The left-hand columns of both figures represent settings from before the alignment procedure and the right-hand columns from after completing registration. The top row corresponds to measures with respect to the error transformation T_error, and the bottom row indicates error measures computed with respect to the two varying subcomponents of transformation T. In Fig. 4-7 we can see that, with the exception of the few cases where the algorithm did not have enough time to complete convergence, the results cluster closely around the ideal settings. (The outlier data points are indicated by a circled cross-mark to distinguish them from the others.) These results are further improved on a higher level of the pyramid, as presented in Fig. 4-8. We also ran experiments on the original dataset, trying to refine the outputs from the second pyramid level even further, but these experiments did not improve the results very much and they are computationally very expensive. Hence, we do not include those results here.
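A before/after error scatter of the kind shown in Figs. 4-7 and 4-8 can be produced from the logged experiment results with a short plotting routine. The error arrays are assumed to come from the experiment harness; matplotlib is used here purely for illustration and is not the tool named in the thesis.

```python
import matplotlib.pyplot as plt

def plot_error_distribution(d_before, r_before, d_after, r_after, title=""):
    """Scatter displacement error (mm) vs. rotation-angle error (deg), before and after."""
    fig, (ax0, ax1) = plt.subplots(1, 2, figsize=(9, 4), sharex=True, sharey=True)
    ax0.scatter(d_before, r_before, marker="x")
    ax0.set_title("Initial position")
    ax1.scatter(d_after, r_after, marker="x")
    ax1.set_title("Resulting position")
    for ax in (ax0, ax1):
        ax.set_xlabel("displacement error (mm)")
        ax.set_ylabel("rotation angle error (deg)")
    fig.suptitle(title)
    return fig
```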

The error terms corresponding to T_error are summarized in Table 4.6.

DATASET        METHOD   LEVEL  No.    d_e (mm)  r_e (deg)  OFFSET
Gage's skull   Reg-Pz   sm1    27/-   -         -          from sm2
                        sm2    27/-   -         -          [0,10,10,15 deg]
               Reg-Pow  sm1    -      -         -          from sm2
                        sm2    -      -         -          from sm3
                        sm3    -      -         -          [5,10,10,15 deg]

Table 4.6: Error measurements for the X-ray fluoroscopy and CT registration experiments on the Gage skull dataset.

We also invite the reader to judge the registration results qualitatively. The images in the top row of Figure 4-9 display the two views created using the offset transformation estimates, and the bottom row shows the projection images produced with the output of the registration. These images demonstrate the extent of the recovered offset. Figure 4-10 helps to judge accuracy: there, the main outlines of the DRR images in both the offset and the registered poses are displayed on the real X-ray images. We can see that the DRR boundaries closely follow the outlines appearing on the original acquisitions.

Reg-Pow Experiments

We ran a set of nine experiments on three levels of the registration hierarchy using the Powell optimization method. All nine experiments converged to the optimal transformation settings. Although there is a definite improvement in the accuracy results across the different stages, on average these experiments could not produce the same accuracy as the stochastic gradient method presented above. Table 4.6 presents the relevant registration outcomes. It is possible that we might have gained some further error reduction on the top-most level of the pyramid; however, the running time was so high even on the second level (sm1) that these experiments were not conducted. Such a time-consuming solution could not be considered for the applications that we focus our attention on.

Figure 4-7: Error distribution based upon the results of 30 experiments with random initial offsets on a given interval, (a) prior to registration and (b) after registration. Row 1 displays plots with respect to the error terms d_e and r_e, while row 2 shows errors in D_d and R.

Figure 4-8: Error distribution based upon the results of 30 experiments with random initial offsets on a given interval, (a) prior to registration and (b) after registration. Row 1 displays plots with respect to the error terms d_e and r_e, while row 2 shows errors in D_d and R.

Figure 4-9: Registration results of an experiment on real X-ray and CT of the Gage's skull dataset using the Reg-Pz method. (a) With initial transformation estimate; (b) with transformation estimate resulting from registration.

Figure 4-10: Registration results of an experiment on real X-ray and CT of the Gage's skull dataset using the Reg-Pz method. Contours of the DRR images created by the output of the registration algorithm are overlaid on the original fluoro images. (a) With initial transformation estimate; (b) with transformation estimate resulting from registration.
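Overlays such as those in Figures 4-10 through 4-13 follow the qualitative procedure described at the start of this section: render a DRR at a given pose, extract its major outlines, and burn them into the observed acquisition. The sketch below assumes a hypothetical `render_drr` function for the projection step and uses scikit-image's Canny detector in place of the C++ implementation mentioned in the text.

```python
import numpy as np
from skimage.feature import canny

def overlay_drr_contours(render_drr, observed_image, pose):
    """Burn the Canny edges of a DRR rendered at `pose` into the observed 2D image."""
    drr = render_drr(pose)                           # synthetic projection at this pose
    edges = canny(np.asarray(drr, dtype=float), sigma=2.0)
    base = np.asarray(observed_image, dtype=float)
    base = base / max(base.max(), 1e-9)              # normalize the fluoro/X-ray to [0, 1]
    overlay = np.stack([base, base, base], axis=-1)  # grayscale -> RGB
    overlay[edges] = (1.0, 0.0, 0.0)                 # draw the DRR contours in red
    return overlay
```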

Figure 4-11 presents results from an experiment where a fixed displacement offset of 30 mm was applied in the x-direction.

Experiments with Fluoroscopy of the Phantom Pelvis

In the experiments involving the phantom pelvis, we had a very difficult time finding an accurate ground-truth pose even manually. The lateral images contain considerably less information about the position of the anatomy than the AP ones, as images from the two opposite sides are almost identical. Two further challenges were that the fluoroscopic images of the phantom pelvis are greatly truncated (parts of the ilium are missing and a black rim appears around the margin of the acquisition) and that the pincushion distortion effects were not accounted for at the time of imaging. Hence, our results with respect to this anatomy are at a very early stage, and we present only qualitative results in this section. Figure 4-12 presents results from an experiment with the Reg-Pow method and Fig. 4-13 shows the results of the Reg-Hi method. We can see that, while at the outset of the algorithm the DRR outlines do not really fit the edges in the fluoro acquisitions, the edges at the final stage nicely match the boundaries of the observed images. In the case of the pelvis images, one should focus on matching object boundaries closer to the image centers, as the warping effect is not as strong in that region as towards the image margins.

4.6 Summary

This chapter presented the experimental analysis of our newly proposed alignment procedures. We first characterized our objective function via probing experiments and motivated the use of a multiresolution registration framework. Then the performance of the various algorithms was tested both on CT-derived and real medical image datasets. In the controlled settings, the two gradient-based techniques

Figure 4-11: Registration results of an experiment on real X-ray and CT of the Gage's skull dataset using the Reg-Pow method. Contours of the DRR images created by the output of the registration algorithm are overlaid on the original fluoro images. (a) With initial transformation estimate; (b) with transformation estimate resulting from registration.

Figure 4-12: Registration results of an experiment on real X-ray fluoroscopy and CT of the phantom pelvis dataset using the Reg-Pow method. Contours of the DRR images created by the output of the registration algorithm are overlaid on the original fluoro images. (a) With initial transformation estimate; (b) with transformation estimate resulting from registration.

Figure 4-13: Registration results of an experiment on real X-ray fluoroscopy and CT of the phantom pelvis dataset using the Reg-Hi method. Contours of the DRR images created by the output of the registration algorithm are overlaid on the original fluoro images. (a) With initial transformation estimate; (b) with transformation estimate resulting from registration.


More information

A comparison of three methods of ultrasound to computed tomography registration

A comparison of three methods of ultrasound to computed tomography registration A comparison of three methods of ultrasound to computed tomography registration by Neilson Mackay A thesis submitted to the School of Computing in conformity with the requirements for the degree of Master

More information

Basic principles of MR image analysis. Basic principles of MR image analysis. Basic principles of MR image analysis

Basic principles of MR image analysis. Basic principles of MR image analysis. Basic principles of MR image analysis Basic principles of MR image analysis Basic principles of MR image analysis Julien Milles Leiden University Medical Center Terminology of fmri Brain extraction Registration Linear registration Non-linear

More information

82 REGISTRATION OF RETINOGRAPHIES

82 REGISTRATION OF RETINOGRAPHIES 82 REGISTRATION OF RETINOGRAPHIES 3.3 Our method Our method resembles the human approach to image matching in the sense that we also employ as guidelines features common to both images. It seems natural

More information

Anomaly Detection through Registration

Anomaly Detection through Registration Anomaly Detection through Registration Mei Chen, Takeo Kanade, Henry A. Rowley, Dean Pomerleau CMU-RI-TR-97-41 The Robotics Institute Carnegie Mellon University Pittsburgh, Pennsylvania 15213 November,

More information