A comparison of three methods of ultrasound to computed tomography registration


A comparison of three methods of ultrasound to computed tomography registration

by

Neilson Mackay

A thesis submitted to the School of Computing in conformity with the requirements for the degree of Master of Science

Queen's University
Kingston, Ontario, Canada
January 2009

Copyright © Neilson Mackay, 2009

Abstract

During orthopaedic surgery, preoperative CT scans can be aligned to the patient to assist the guidance of surgical instruments and the placement of implants. Registration (i.e. alignment) can be accomplished in many ways: by registering implanted fiducial markers, by touching a probe to the bone surface, or by aligning intraoperative two-dimensional fluoroscopic images with the three-dimensional CT data. These approaches have problems: they require exposure of the bone, subject the patient and surgeons to ionizing radiation, or do both.

Ultrasound can also be used to register a preoperative CT scan to the patient. The ultrasound probe is tracked as it passes over the patient and the ultrasound images are aligned to the CT data. This method eliminates the problems of bone exposure and ionizing radiation, but is computationally more difficult because the ultrasound images contain incomplete and unclear bone surfaces.

In this work, we compare three methods to register a set of ultrasound images to a CT scan: Iterative Closest Point, Mutual Information, and a novel method, Points-to-Image. The average Target Registration Error and speed of each method are presented along with a brief summary of their strengths and weaknesses.

Acknowledgments

This thesis could not have been written without the support of Dr. James Stewart. During my undergraduate years he introduced me to computer graphics and medical imaging. He taught me that computer graphics possesses not only the ability to act as a creative outlet but also the power to transform the medical community. His guidance throughout my undergraduate and graduate years has proven invaluable.

While shorter in length, my relationship with Dr. Purang Abolmaesumi has been equal in admiration and gratitude. I will always be indebted to both of my supervisors for their unwavering support and kind words.

My mom and dad both served as a great source of inspiration. During my rocky beginnings in elementary school and junior high they kept reminding me that it is always possible to achieve anything given enough determination. This thesis is as much a result of their support as it is of my hard work.

Of course I must also thank Melissa, my honorary 3rd supervisor. Her many hours of editing, comfort, and even her delicious bowls of Spaghetti Monetti will never be forgotten. I cannot wait to spend my future years reciprocating her kindness and love.

Contents

Abstract
Acknowledgments
Contents
List of Tables
List of Figures

1 Introduction
  1.1 Overview of Ultrasound/Computed Tomography Registration
  1.2 Contributed work
  1.3 Chapter Overview

2 Background
  2.1 Physics of US and CT Formation
    Ultrasound
    Computed Tomography
  2.2 Registration
    Fiducial Landmark Registration
    Feature Registration
    Voxel Intensity Registration
    Image to Feature Registration
  Summary

3 Method
  Purpose
  Mutual Information
    Theory
    Registration Using Mutual Information
  3.3 Iterative Closest Point
    Motivation
    Iterative Closest Point Theory
    US-CT Registration Using ICP
    Implementation
  Points To Image
    Motivation
    Theory
    Implementation
    Preprocessing

4 Experiment and Results
  Experiment Arrangement
    CT Data Acquisition
    US Data Acquisition
  Experiment Design
    Gold Standard
    Target Registration Error
    Initial Starting Position
  Accuracy of Each Method
    Accuracy of Mutual Information
    Accuracy of Iterative Closest Point
    Accuracy of Points-to-Image
  Summary of Results

5 Summary and Future Work
  Summary
  Future Work
    Improvement to Experiment Design
    Improving the thickness approximation of PTI
    Robustness of ICP

Bibliography
Glossary

List of Tables

2.1 Hounsfield Unit Values of Common Structures
Comparison of intra-operative CT-US registration techniques
High-level findings of each method

List of Figures

1.1 (Left) Pre-operative MRI. (Right) Patient in OR with pre-operative MRI projected on patient's skull. With permission from M. Leventon [9]
(Left) Ultrasound image. (Right) Ultrasound image rotated by 90 degrees
Example of US-to-CT image registration - (left) US and CT images used for registration; (right) registration result showing the US images superimposed on a CT bone surface mesh
Ultrasound image of a radius bone phantom in a water bath - speckle noise can be observed around the perimeter of the bone. Also, there is a partial echo effect on the bottom of the image, below the bone surface
Ultrasound image of a distal radius bone acquired from a human subject. The image shows occlusion due to the bone surface
Image Registration Spectrum
Left - ultrasound image; Right - ultrasound superimposed on a CT image of the same object
Examples of Joint Entropy with a transformation of none, translation, and rotation, from left to right
Ultrasound image before and after thresholding
Left - ultrasound image with segmented surface superimposed; Right - segmented points from ultrasound on a CT mesh
Ultrasound surface mesh registered to CT surface mesh
Left - ultrasound image; Right - CT bone surface mesh of same object
CT mesh (solid line) overlaid on ultrasound
3.9 Difference between sampling a three-dimensional ultrasound and a two-dimensional ultrasound. The top two images are a three-dimensional ultrasound from two different viewpoints, one looking down the normal (top right) of the US and the second looking 90 degrees from the normal (top left). The bottom two views are both looking 90 degrees from the normal. In the two-dimensional case (PTI), the US is a line. The lines extending from the plane represent the weight introduced in the PTI method
Ultrasound image before and after Gaussian smoothing filter
Backfacing Normal Removal - CT mesh (solid line) overlaid on ultrasound - the mesh points with normals within 90 degrees of the direction of the ultrasound probe have been removed
Pelvis phantom image
CT image slice
Polaris Tracking System
Experiment Registration Flow
Two registration results. Both images are ultrasound pixels with CT points superimposed on top. Left - good registration result; Right - bad registration result
Example of TRE calculation for points transformed from the GS to the perturbed position and the registered position
MI Results - TRE
MI Results - FRE
ICP Results - TRE
ICP Results - using FRE
PTI Results - TRE
PTI Results - FRE

Chapter 1

Introduction

The incorporation of images depicting the internal anatomy of a patient has been done in a surgical setting for decades. In typical examples a surgeon will study an image modality such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), X-ray or Ultrasound (US) prior to surgery. They will use this image as an aid prior to or during an operation. In these examples, the surgeon must use their intuition to find a relationship between the image taken before surgery and the patient in the operating room (OR).

The purpose of Image Guided Surgery (IGS) is to substitute a surgeon's intuition with something concrete. Figure 1.1 shows an example of a pre-operative three-dimensional MRI image of a patient's head projected on their skull during surgery. Within this environment, the surgeon no longer needs to infer how the image is related to the patient; the relationship is put into context with the projection.

To provide the context, an IGS application requires that a mapping be known between corresponding anatomical structures of a pre-operative image and the patient undergoing surgery. This mapping is found through a process called registration.

Figure 1.1: (Left) Pre-operative MRI. (Right) Patient in OR with pre-operative MRI projected on patient's skull. With permission from M. Leventon [9]

Registration can be defined as a process that finds a transformation that maps features in one coordinate space to their location in another coordinate space. For example, the transform produced by a registration algorithm used on the images in Figure 1.2 would ideally rotate the right image 90 degrees counterclockwise.

Registration does not necessarily need to be between two-dimensional ultrasound images. It can involve other image modalities such as CT or MRI, or even other data types such as points, lines, surfaces, and more. A detailed discussion of registration and examples will be presented in Chapter 2. This chapter will focus on one example in particular: registering Ultrasound images to Computed Tomography images.

Ultrasound to Computed Tomography is a type of image registration that holds a number of benefits. Possibly the greatest benefit is the non-intrusive nature of an

Figure 1.2: (Left) Ultrasound image. (Right) Ultrasound image rotated by 90 degrees.

ultrasound. There is no need for physically implanted markers that would normally require extra incisions. Also, repeated exposure to ultrasonic waves has fewer, if any, negative side effects compared to other modes of image acquisition such as CT.

Choosing a registration technique to find a correspondence between a US image and a CT volume can be difficult. In the past ten years numerous registration methods have been developed; survey papers of such techniques have references numbering in the hundreds [25, 28]. The purpose of this thesis is to implement and compare three registration methods that have been tailored specifically to register ultrasound to CT: Mutual Information [26], Iterative Closest Point [13], and a novel method developed in this thesis called Points-to-Image that is an extension of Brendel's and Aylward's work [6]. A comparison of the registration results and a general list of conditions where each method would be most appropriate will be given.

Figure 1.3: Example of US-to-CT image registration - (left) US and CT images used for registration; (right) registration result showing the US images superimposed on a CT bone surface mesh

1.1 Overview of Ultrasound/Computed Tomography Registration

Each registration technique is designed to find a transformation that maps a set of two-dimensional ultrasound images (which are embedded in 3D) to the same coordinate space as a three-dimensional CT volume, such that the corresponding anatomical structures between images are coincident (Figure 1.3). Prior to registration, the CT volume and ultrasound images are defined in separate coordinate systems.

The CT volume is represented as a volumetric data set. This is a three-dimensional discrete grid of volume elements known as voxels. Analogous to a pixel in a two-dimensional image, a voxel of a CT volume stores a tissue density value collected from a CT scan.

The ultrasound images are time-varying two-dimensional images embedded in a

three-dimensional space. Each image is a two-dimensional array of pixels, where a pixel of the image represents the intensity of ultrasonic waves reflected from the corresponding position in the body.

An optical tracker (Northern Digital Polaris) is used to track the ultrasound images in three-dimensional space. The ultrasound probe is augmented with a Dynamic Reference Body (DRB). The DRB is a positioning device that can be optically tracked by the Polaris and is used to determine orientation and location. When the probe acquires an image, the image is associated with the DRB's current orientation and location. This transform locates the two-dimensional pixels of the ultrasound image in a three-dimensional coordinate system shared by every ultrasound image tracked by the Polaris, which is called the tracker coordinate space.

To register the ultrasound images to the CT volume, a transformation must be found that maps the pixels of the tracker coordinate space to their corresponding voxels in the CT coordinate space. First, the ultrasound images are manually placed into the CT coordinate system using an initial guess of the correct registered position. The registration algorithm then attempts to compute a transformation between the initial position and the true position.

There are various difficulties in computing a transform between the initial position and the true position. First, the initial position can differ from the true position by six degrees of freedom. A degree of freedom is an independent parameter that affects the transformation. For instance, in a three-dimensional coordinate system, translation offers three possible degrees of freedom (movement along the x, y and z axes). Similarly, rotation (roll, pitch and yaw) also accounts for three degrees of freedom. The size of the registration search space is exponential in the number of degrees

of freedom. For instance, a system limited to one degree of freedom needs only to change one variable to explore the entire search space to find the registered position.

In general, a registration search space is one of two kinds: rigid or deformable. A rigid registration algorithm will apply a transformation to the images consisting of six degrees of freedom: translation in three directions and rotation around three perpendicular axes. In this system the images will change in position but their original shape will remain the same. Deformable registration algorithms combine the degrees of freedom of a rigid transformation with additional parameters such as scale in three directions. In a deformable system, the original image changes not only in position but also in shape and size. The registration algorithms discussed in this thesis are limited to rigid systems without deformations.

A second source of difficulty occurs because the same tissue is represented differently in US and CT images. A typical registration algorithm will attempt to make similar features of an image coincident. Images of the same modality (e.g. CT, US or MRI) usually make good candidates as the image properties and voxels/pixels of the image have similar values for the same tissues. Because CT and US are different modalities, their images have inherent differences that make identifying similar features difficult. These differences and the resulting difficulties of registration will be discussed in subsequent chapters.

The last prominent difficulty is associated with the need for the registration procedure to be completed intra-operatively. In the operating room, factors such as speed, accuracy, and dependability are imperative to the success of the surgery and to the patient's health. An intra-operative registration algorithm must be quick, reliable, and accurate.
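The rigid parameterization described above (three translations plus three rotations) can be sketched in a few lines. This is an illustrative helper, not code from this thesis; a rigid transform is commonly packed into a 4x4 homogeneous matrix, and the Euler-angle convention used here is one of several possibilities:

```python
import numpy as np

def rigid_transform(tx, ty, tz, roll, pitch, yaw):
    """Build a 4x4 rigid-body transform from six degrees of freedom:
    three translations and three rotations (radians), composed here
    as R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# A rigid transform changes position but preserves shape: the distance
# between any two transformed points is unchanged.
T = rigid_transform(5.0, -2.0, 1.0, 0.1, 0.2, 0.3)
a = T @ np.array([1.0, 0.0, 0.0, 1.0])
b = T @ np.array([0.0, 1.0, 0.0, 1.0])
```

A deformable transform would add further parameters (e.g. three scales), which is exactly what this thesis excludes.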

1.2 Contributed work

The main goal of this thesis is to tailor and compare three of the most promising registration methods for CT/US alignment of the pelvis. A comparison of the three methods will demonstrate the accuracy and reliability of each registration method in calculating a transformation, given different initial positions and various CT/US images. The contributions of this thesis are:

A comparison of specialized versions of Mutual Information, ICP, and the Points-to-Image method to align ultrasound and CT data for the pelvis. To the best of the author's knowledge, this is the first comparison of these three methods using CT/US registration of the pelvis. Each of the methods implemented in this thesis was modified to specialize it to CT/US registration.

A new Points-to-Image registration method as an extension to work done by Brendel and Aylward [6]. The Points-to-Image method is discussed in detail in Chapter 3 and the results of its registration are presented in Chapter 4.

1.3 Chapter Overview

The remainder of the thesis is presented in four chapters:

Chapter 2 - Background: This chapter is a review of past work done in image registration with a focus on papers examining CT-to-Ultrasound intra-operative registration. This chapter also includes an overview of both CT and ultrasound image formation and properties.

Chapter 3 - Method: This chapter gives a detailed description of the three registration methods implemented: Mutual Information, Iterative Closest Point and Points-to-Image. The discussion of each method covers the motivation behind choosing it, a brief overview of its theory, and how it was implemented in this thesis.

Chapter 4 - Results: This chapter contains the statistics of the registration results of the three methods implemented. A description is given of the data acquisition, the tools used, and the validation of results in reference to the ground truth (i.e. the true registration). The accuracy of all three methods is presented in table format.

Chapter 5 - Conclusion: The bulk of this chapter is a discussion of the advantages and disadvantages of all three methods. Additionally, suggestions are made as to which registration method is most appropriate to certain surgical conditions and types of CT/ultrasound data.

Chapter 2

Background

2.1 Physics of US and CT Formation

Knowledge of the properties of an image can be used to improve the performance of a registration algorithm. This can be done in many ways, such as removing noise and inherent artifacts, or segmenting features common to both images. The registration algorithms used in this thesis all make assumptions based on specific image properties that will be discussed in this section.

The discussion of the ultrasound and CT image properties is presented in two parts for each modality. A brief description of the formation of the image is presented, followed by a discussion of the properties of the imaging modality important to CT-to-Ultrasound image registration.

Ultrasound

Formation

In general, an ultrasound image shows the final strength of sound waves that emanate from an ultrasound probe, reflect off an object, and return to the same probe. An ultrasound probe transmits high-frequency (1 to 5 MHz) ultrasonic pulses through tissue. As a pulse travels, it may hit a boundary between tissues of different acoustic impedances (e.g. between fluid and soft tissue, or between soft tissue and bone). When this boundary is reached, a portion of the sound wave is reflected back to the probe, while the rest travels farther into the tissue.

The ultrasound probe senses the intensities of the reflected waves. These intensities are shown on the ultrasound image, where the pixel intensity represents the strength of a returning wave and the position corresponds to the time it took for the wave to travel to and from the probe.

Artifacts

An artifact is a feature that exists in the ultrasound image but doesn't exist in the scanned tissue. Since these features do not actually exist in the tissue, a CT image of the same tissue will not contain the same features as the US image. As a result, an artifact can be a source of error for a registration algorithm designed to find similarities between CT and ultrasound images. Three prominent artifacts are speckle noise, echo effects and occlusion.

Speckle noise is the result of the ultrasonic pulse scattering when it hits a density change. The algorithm that constructs the ultrasound image from the ultrasonic pulses assumes that when a pulse hits a density change, some of the pulse will reflect at a 180-degree angle. When a pulse reflects at any other angle it is said to have scattered.

Figure 2.1: Ultrasound image of a radius bone phantom in a water bath - speckle noise can be observed around the perimeter of the bone. Also, there is a partial echo effect on the bottom of the image, below the bone surface.

The scattered pulse will take longer to reach the probe and will intersect the probe at an incorrect location. The resulting image will, as a consequence, have intensity values in locations that do not correspond to actual acoustic boundaries (Figure 2.1). This is one source of speckle noise.

Echo effects are the result of the ultrasonic pulse reflecting multiple times before it returns to the ultrasound probe. After the pulse reflects off a density change, it starts to propagate back towards the ultrasound probe. During this time the pulse may intersect another density change and reflect back in its original (outgoing) direction. If it then intersects the original density change that caused the first reflection, the pulse will reflect again. If this twice-reflected pulse is strong enough, it will reach the ultrasound probe. The result is two copies of the same object in one ultrasound image, with the less intense copy displaced farther from the probe. This is typically seen in images that contain strong density changes, such as tissue to bone.
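The geometry of these echo artifacts can be sketched numerically. Assuming the conventional average speed of sound in soft tissue of about 1540 m/s (the value scan converters typically assume; the functions below are illustrative helpers, not code from this thesis), the depth assigned to an echo and the positions of its reverberation copies follow directly from the round-trip timing:

```python
# Speed of sound assumed for soft tissue: ~1540 m/s = 1.54 mm per microsecond.
C_MM_PER_US = 1.54

def echo_depth_mm(round_trip_time_us):
    # The pulse travels to the boundary and back, so halve the distance.
    return C_MM_PER_US * round_trip_time_us / 2.0

def reverberation_times_us(depth_mm, n_copies=3):
    """Arrival times of the primary echo and its reverberation copies.
    Each extra probe-to-boundary round trip adds the same delay, so the
    copies appear at integer multiples of the primary arrival time."""
    primary = 2.0 * depth_mm / C_MM_PER_US
    return [primary * k for k in range(1, n_copies + 1)]

times = reverberation_times_us(15.4)   # a strong boundary 15.4 mm deep
```

Because the scan converter maps arrival time directly to depth, the second arrival in `times` is drawn as a fainter copy of the boundary at twice the true depth, which is exactly the echo artifact described above.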

Figure 2.2: Ultrasound image of a distal radius bone acquired from a human subject. The image shows occlusion due to the bone surface.

Occlusion occurs when a tissue boundary has a density change so strong that when the ultrasonic pulse reaches it, the pulse is fully reflected. When this occurs, the resulting ultrasound image will contain a dark section below the strong-density region (Figure 2.2). Bone shadow is an example of occlusion where the strong density change is between the soft tissue and the surface of the bone.

Computed Tomography

Formation

The formation of a CT image is based on the exponential attenuation of X-ray energy [1]. The tissue to be scanned is placed between an X-ray transmitter and receiver. X-rays pass through the tissue and the attenuated energy is recorded as a single view. The transmitter and receiver are then rotated by a predefined step and X-rays are once again transmitted and their attenuation is recorded. This is repeated until 300 to 1000 views have been taken and the transmitter and receiver have rotated a total

of 180 degrees [23].

There are four main techniques used to reconstruct an image given multiple views:

- Simultaneous linear equations
- Iterative techniques, such as the Algebraic Reconstruction Technique (ART), the Simultaneous Iterative Reconstruction Technique (SIRT), and the Iterative Least Squares Technique (ILST)
- Filtered back projection
- Fourier reconstruction

Probably the most popular, and certainly the most interesting, is the Fourier reconstruction algorithm. Interested readers are referred to Smith [23], page 442, for further reading.

The values of each pixel in the reconstructed image are stored as CT numbers, which represent the attenuation value of the corresponding voxel. CT numbers are given in Hounsfield Units (HU). The attenuation of water is 0 HU. A standard grayscale CT image displays Hounsfield Units as values ranging from -1000 HU (black) to +1000 HU (white). Typical ranges of Hounsfield Units for various tissues are shown in Table 2.1 [24].
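The HU scale is defined by normalizing a tissue's measured linear attenuation coefficient against that of water, so that water maps to 0 HU and air to -1000 HU. A minimal sketch (the attenuation coefficient used below is an illustrative placeholder for a given beam energy, not a measured value):

```python
def hounsfield_units(mu_tissue, mu_water):
    """Convert a linear attenuation coefficient to Hounsfield Units.
    By definition, water maps to 0 HU and air (mu ~ 0) to -1000 HU."""
    return 1000.0 * (mu_tissue - mu_water) / mu_water

MU_WATER = 0.19    # illustrative coefficient for water, in 1/cm

print(hounsfield_units(MU_WATER, MU_WATER))   # water -> 0.0
print(hounsfield_units(0.0, MU_WATER))        # air   -> -1000.0
```

This normalization is what makes CT values "absolute": a given tissue type lands in a fixed HU range regardless of the scanner.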

Table 2.1: Hounsfield Unit Values of Common Structures

Substance     HU
Bone          400 to 1000
Soft Tissue   40 to 80
Water         0
Fat           -60 to -100
Lung          -400 to -600
Air           -1000

Artifacts

There is a wide range of CT artifacts originating from many sources. Outlining every artifact and its origin goes beyond the scope of this thesis. An interested reader can consult an article by Barrett et al. [4], which summarizes each type of common CT artifact: physics-based, patient-based, scanner-based and helical.

In contrast to ultrasound, CT images tend to have relatively few artifacts. Typically, CT images do not suffer from the more obtrusive artifacts seen in ultrasound such as echo effects, speckle noise and occlusion. While these and other artifacts can exist, careful patient positioning, optimum scanning parameters and modern CT scanners minimize their occurrence.

Properties

The amount of attenuation experienced by the X-ray is a function of the electron density of the tissue it passes through. This means that two tissues of the same density will have the same CT number. It is important to understand how this property distinguishes CT from ultrasound. In CT, a voxel stores a CT number that has a fixed range for each type of tissue. In ultrasound, a pixel represents a density difference or acoustic impedance mismatch between neighboring tissues. In a sense, CT pixels

are absolute values while ultrasound pixels are relative values. The importance of this will become evident when the topic of image segmentation is discussed.

2.2 Registration

The purpose of intra-operative registration is to find a spatial transformation between the anatomical structures of the patient and the three-dimensional image data used for surgical planning and intra-operative guidance. Intra-operative registration techniques can be classified in various ways: by the type of data to be registered, by the type of mapping produced, and by whether the technique is automatic or manually assisted.

This thesis concentrates on work in orthopedics, where the structure (that is, bone) is inflexible. Given that deformations caused by breathing or other movement would not deform these structures, a rigid transformation is appropriate. We will examine three forms of rigid registration, categorized by the type of data used: Fiducial Landmark registration, Feature registration, and Voxel Intensity Similarity registration.

2.2.1 Fiducial Landmark Registration

One of the first methods used to register a patient's anatomy to a medical image found a transformation between corresponding fiducial markers in a medical image and on the patient. These markers, which are typically 1 mm beads of a radiopaque material such as tantalum, are placed in the patient prior to acquiring the preoperative image used for planning. Then, during surgery, the corresponding markers are identified in a frame of reference relative to the operating room. Registering the patient to

the preoperative image involves finding a transformation between the points identified in the OR and the ones visible in the medical image. The most common approach to finding this transformation is known as Least Squares, which will be discussed in greater detail in the next chapter.

Fiducial registration is very accurate, but it does involve certain risks. The need for extra incisions to place the markers can expose the patient to health risks associated with surgery, possible infection, and longer recovery time. Due to its invasive nature, fiducial registration is not recommended unless absolutely necessary.

2.2.2 Feature Registration

A feature-based registration method finds a transformation between corresponding features in the preoperative image and another image captured during surgery. Features are extracted from an image through a process known as segmentation, a term given to any algorithm that removes or identifies a feature in an image. Segmentation typically has a high processing cost, especially when used in a three-dimensional environment. For this reason, feature-based registration algorithms segment the preoperative image features prior to surgery, while images captured during surgery are segmented on the fly. The registration step finds a transform that best aligns the segmented features. Since the features share the same coordinate space as their original image, the registration transform that aligns the features also aligns the original images.

A common technique in feature-based registration is to convert the segmented features into point sets prior to registration. An early example that achieves this is the "head in hat" method [16], where one point set is labeled as the head and the

other the hat. The algorithm iteratively moves the hat point set closer to the head until all the points are within a desired distance of each other.

Perhaps the most popular point-based feature registration method is Iterative Closest Point (ICP) [5]. This method is similar to the fiducial landmark registration method mentioned in Section 2.2.1, where Least Squares was used to find a transformation between point sets. The difference between the two methods is that in ICP there is no known initial correspondence between the point sets. In fiducial-based methods, a surgeon will typically identify which fiducial points in the OR represent the same points in the preoperative image. ICP instead produces point-pair correspondences under the following assumption: for every point in one set, the closest point in the other set represents the same anatomical structure. The ICP algorithm is designed to minimize the distance between all pairs of closest points. It iteratively finds the least squares transformation between all pairs, applies the transform to one point set, and then finds a new set of corresponding pairs of points. The process terminates when the average distance between corresponding points is less than a user-defined distance. A review of image guided surgery methods that have incorporated ICP can be found later in this chapter and in Section 3.3.

Feature Registration using Preoperative CT and Intraoperative US

Within the scope of feature-based methods, ICP is a commonly used technique in intraoperative CT/US registration.

Penney et al. developed a technique using the ICP method to register preoperative 3D CT or MR to a set of intra-operative ultrasound images [18]. Their

purpose was to aid needle placement during thermal ablation of liver metastases. The method used automatic segmentation in the CT/MR and manual segmentation in the ultrasound to find the points of the inferior vena cava, hepatic veins, and portal veins of the liver. These three point sets were passed into a modified ICP algorithm that applied a weighting based on the number of points in each set. This was done to make up for the discrepancy in the sizes of the three point sets.

Another method that used a modified ICP algorithm was developed by Amin et al. to register the bone surface of a pre-operative CT volume to a 3D ultrasound of the pelvis [1]. Segmentation of US images is typically error prone, and errors in segmentation can produce points that are not members of the US bone surface. The modification to the classic ICP algorithm proposed by Amin et al. was to associate a weight with each point segmented from the ultrasound to indicate the probability that the point is a member of the bone surface. The weight is the product of three metrics: the intensity of the associated US pixel, the probability that the associated pixel lies on an edge in the US image, and the spatial distance between the US and CT bone surfaces positioned at the initial estimate provided by the surgeon.

The major disadvantage in all of these papers is the reliance on segmentation of the ultrasound image. In Penney's paper, for example, an undisclosed error in segmentation occurred because of the manual segmentation of the ultrasound; this led to the lowest accuracy among the presented papers. Amin's thesis had excellent results, but relied on the assumption that the bone surface will always have a shadow. Erroneous segmentation results could lead to a larger error in the registration algorithm.
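The basic ICP loop described above can be sketched in a few lines. This is a generic, brute-force illustration (using the standard SVD closed form for the least-squares rigid fit), not the modified, weighted algorithms of Penney or Amin:

```python
import numpy as np

def icp(src, dst, iters=50, tol=1e-9):
    """Minimal rigid ICP: pair each source point with its nearest
    destination point, solve the least-squares rigid transform for
    those pairs (SVD closed form), apply it, and repeat until the
    mean closest-point distance stops improving."""
    src = src.copy()
    prev_err = np.inf
    err = prev_err
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondence.
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        pairs = dst[d2.argmin(axis=1)]
        err = np.sqrt(d2.min(axis=1)).mean()
        if prev_err - err < tol:
            break
        prev_err = err
        # Least-squares rigid fit between src and its paired points.
        cs, cp = src.mean(axis=0), pairs.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (pairs - cp))
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        src = (src - cs) @ R.T + cp
    return src, err

# Toy check: a shuffled copy of the same point set registers exactly.
rng = np.random.default_rng(0)
pts = rng.random((30, 3))
aligned, final_err = icp(pts, pts[rng.permutation(30)])
```

The US-specific modifications discussed above (per-point weights, shadow assumptions) plug into the least-squares step, which is where a weighted fit would replace the plain SVD fit used here.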

2.2.3 Voxel Intensity Registration

A voxel intensity (or "voxel similarity") based registration method attempts to produce a mapping between two images by optimizing a similarity measure based on the intensity values of both images. Like most registration techniques, the data to be registered start in a shared coordinate space but not in the same position. In voxel-based registration, this initial placement is close enough that a subset of each image occupies the same space. This area shared by both images is referred to as the overlap. Rather than matching a set of points or features, voxel intensity schemes attempt to maximize the similarity between the intensity values in the overlap.

One of the simplest voxel intensity similarity approaches is the Sum of Squared Differences (SSD), sometimes referred to as mean squares [10]. This approach measures the average intensity difference in the overlapping sections of the images. For images A and B the metric is:

SSD(A, B) = (1/N) Σ_{x ∈ A∩B} |A(x) - B(x)|²     (2.1)

where x is a discrete position in the image coordinate space, A∩B represents all the overlapping positions of the voxels in images A and B, and N = |A∩B|. The metric relies on the assumption that the intensity at a spatial point is the same in both images. This limits the use of the metric to single-modality images.

A more popular voxel similarity method is normalized correlation, or the correlation coefficient (CC) [10, 14]. This metric has looser assumptions: as opposed to SSD (where spatial points in both images must have identical intensity values), CC is designed under the assumption that points may differ by a linear scaling.
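Before moving on to correlation, the SSD metric of Equation 2.1 translates directly into array code. A small sketch (representing the overlap as a boolean mask on a common grid is an implementation choice, not notation from the thesis):

```python
import numpy as np

def ssd(A, B, overlap):
    """Sum of Squared Differences (Eq. 2.1): the mean squared intensity
    difference over the positions where images A and B overlap."""
    diff = A[overlap] - B[overlap]
    return (diff ** 2).sum() / overlap.sum()

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0, 2.0], [5.0, 4.0]])
mask = np.ones_like(A, dtype=bool)   # full overlap for the toy example
print(ssd(A, B, mask))   # one position differs by 2 -> (2^2)/4 = 1.0
print(ssd(A, A, mask))   # identical images -> 0.0
```

A registration driven by SSD would search over transformations to minimize this value, which only makes sense when both images use the same intensity scale.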

CC = \frac{\sum_{x \in A \cap B} A(x) B(x)}{\sqrt{\sum_{x \in A \cap B} A^2(x) \; \sum_{x \in A \cap B} B^2(x)}}    (2.2)

With this method, images whose intensities are related by a linear scaling can be registered. For this reason, CC is commonly used in intermodality registration of MRI and CT images [10]. It is also used in other applications, such as the method developed by Blackall et al. [17], which will be discussed below.

Mutual information (MI) is one of the more popular methods used in voxel intensity registration, especially in cases of multimodal registration. Its success is largely due to a more sophisticated approach to comparing the similarity of two images. Instead of comparing the absolute difference between intensities as in SSD, or the linear difference as in CC, MI has a more mathematically complex comparison scheme. The idea behind MI is to maximize a common intensity ratio between overlapping pixels. It does this by assigning a probability that intensity k will overlap with intensity j, for every possible intensity pairing. The algorithm considers a registration good when it can best predict, with the highest probability, the intensity of a pixel in A given the intensity of its overlapping pixel in B. MI will be discussed in greater detail in Section 3.2. A survey paper by Pluim et al. is also an excellent reference on the MI registration algorithm and related work [25].

Voxel Intensity Registration of Preoperative CT and Intraoperative US

Multimodal registration of CT/US images using voxel-similarity or intensity-similarity methods is difficult. The general goal of a voxel registration technique is to maximize

the similarities between image intensities, which, in a multimodal case, may be few. Not only do the intensities of each modality usually differ, but objects that exist in one image may not exist in the other. A successful technique must account for these differences during the registration process.

Schorr and Worn developed a registration technique to align a 3D intraoperative ultrasound to a 3D preoperative CT volume [21]. In this method, the CT and ultrasound are never put in direct relationship; instead, a simulated ultrasound image is derived from the CT volume, and that image is compared with the intraoperative, real ultrasound. The similarity metric used to compare the two is Mutual Information.

Blackall et al. developed a method to enhance corresponding features and suppress non-corresponding features by assigning a probability to each voxel [17]. The probability represents the likelihood of the voxel belonging to a corresponding feature. To register the images, the probability images were used to optimize a Normalized Cross Correlation metric similar to the one described above.

Image to Feature Registration

Image to feature registration is a term introduced by this thesis to categorize any registration technique that produces a mapping between a medical image and a feature set. The medical papers that fit into this category are relatively recent and do not use a standard registration process. That being said, there are two papers that are closely related to this thesis.

The first paper, by Brendel et al., described a registration method that finds a correspondence between a pre-operative CT and an intra-operative 3D ultrasound of the bone surface of the spine [6]. In this paper, the bone from the CT was

segmented from the image and converted into points in 3D. The authors state that the bone surface in the ultrasound contains the brightest intensities in the image. Under this assumption, a similarity metric was created that measures the average value of the ultrasound intensities located at the same 3D positions as the CT bone surface points. The authors assume that this similarity measure is at a maximum when the points of the CT are positioned on the brightest intensities of the ultrasound.

The second paper, by Aylward et al., discusses a technique to register a three dimensional ultrasound to a segmented three dimensional CT surface model of the veins of the liver [3]. Similar to Brendel's method, it attempts to maximize the average value of the ultrasound intensities that overlap the CT surface points. In this method, the value of the overlap was weighted by each point's proximity to the centre of the hepatic veins.

Both Brendel's and Aylward's methods were a major influence on this thesis. An in-depth description of both papers, and a novel method that extends their work, will be given in Chapter 3.

Summary

Intra-operative registration spans a wide breadth of possible techniques, including fiducial landmark registration, feature registration, voxel intensity similarity registration and image to feature registration. These techniques have been applied to US-CT registration with varying limitations (see Table 2.2).

The primary challenges of CT-US registration are a result of the data used. While pre-operative CT images are accurate, the intra-operative US images are susceptible

Method            | Data used                     | Invasiveness                     | Accuracy
------------------|-------------------------------|----------------------------------|------------------------------------------------
Fiducial          | Fiducial landmarks and image  | Incision into patient            | Limited to resolution of image
Feature           | Segmented CT and US           | Preoperative ionization exposure | Limited to segmentation of US
Voxel Similarity  | CT and US image               | Preoperative ionization exposure | Limited to similarity between US and CT images
Image to Feature  | Points and image              | Preoperative ionization exposure | Limited to segmentation and similarity between US and CT images

Table 2.2: Comparison of intra-operative CT-US registration techniques

to artifacts. This makes the US images difficult to segment and dissimilar to CT. The next chapter discusses the implementation of three promising techniques, drawn from the categories above, that address the challenges associated with multimodal data.

Chapter 3

Method

3.1 Purpose

The purpose of this thesis is to find the method best suited to align multiple two dimensional ultrasound images to a three dimensional CT volume. Three methods that accomplish this were chosen for comparison: the Mutual Information based alignment developed by Viola and Wells [26], the ICP method developed by Besl and McKay [5], and the Points to Image (PTI) method, a novel approach developed in this thesis that extends the work of Aylward et al. [3] and Brendel et al. [6].

The methods were chosen as promising techniques within their niche of the registration spectrum presented in Chapter 2. On one side of the spectrum (Figure 3.1) lies Mutual Information based registration, a voxel-similarity based method that aligns two or more images. On the opposite side lies ICP, a feature based method that aligns point sets. In the middle lies PTI, a hybrid of voxel-similarity and feature based methods that aligns a point set to an image.

Figure 3.1: Image Registration Spectrum

The alignment in each method is in the form of a rigid transformation:

A = RB + o    (3.1)

where R is an Euler rotation matrix and o is a translation offset in three dimensions. The remainder of this chapter describes how each method finds this alignment between two images of different modalities. A discussion of the motivation, theory and implementation of each method is given.

3.2 Mutual Information

Mutual Information (MI), first developed in 1996 by Viola et al. [26], aligns two images (Figure 3.2). It has shown promising results in image registration, especially with images of multiple modalities [25]. The discussion of the metric is broken into two sections. First, the theory behind the metric is presented. The following section then concentrates on how the theory is applied to register ultrasound and CT image data.
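The rigid transformation of Eq. (3.1) can be sketched numerically. This is an illustration only; the Euler composition order (Rx Ry Rz) is an assumption, since the thesis does not specify one:

```python
import numpy as np

def euler_rotation(rx, ry, rz):
    """Compose an Euler rotation matrix R = Rx @ Ry @ Rz (angles in radians).
    The composition order is an assumption; other conventions exist."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def rigid_transform(points, R, o):
    """Apply Eq. (3.1), A = RB + o, to an (N, 3) array of points
    stored as row vectors (hence the transpose of R)."""
    return points @ R.T + o

R = euler_rotation(0.0, 0.0, np.pi / 2)   # 90 degrees about the z axis
o = np.array([1.0, 2.0, 3.0])
p = np.array([[1.0, 0.0, 0.0]])
print(rigid_transform(p, R, o))           # approximately [[1. 3. 3.]]
```

Every method in this chapter searches over exactly these six parameters: three rotation angles and three translation components.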

Figure 3.2: Left - Ultrasound image. Right - Ultrasound superimposed on a CT image of the same object.

Theory

Mutual Information is a similarity metric used in registration. It can be thought of as a measure of how well one image can predict another. MI compares the information between images to arrive at its measure, where information is defined as entropy. The assumption MI makes is that when images are perfectly registered, their shared entropy is minimized. The following sections cover the theory needed to fully understand the metric. An excellent survey of mutual information was written by Pluim et al. [25]; the subsequent explanation of the metric is an overview of that report. First, an introduction to and definition of entropy is given. Following that, there is a discussion of joint entropy and its role in image registration. Finally, these concepts are used to explain what is known as mutual information.

Entropy

Mutual information defines information as entropy. Entropy was first introduced in 1928 by Hartley [20], who developed a method to measure how much information was transferred during telegraph and radio communications. Telegraph messages were arranged into a string of n symbols with s possibilities for each symbol. For a given n and s there exist s^n potential string combinations. As the length of the string (n) grows, the number of possible string combinations grows exponentially. As a result, strings with different lengths are difficult to compare. Hartley developed a measure that grows linearly with the string's length, making information measures comparable:

H = \log s^n    (3.2)
  = n \log s    (3.3)

Hartley's information measure, H, depends on the number of possible string combinations. If only one type of symbol can be transmitted (s = 1) then, for a given n, there will be only one possible outcome. As a result no information is gained (n \log 1 = 0). Conversely, as s or n increases, so does the number of possible string combinations, yielding a higher H.

A disadvantage of Hartley's entropy measure is the assumption that all symbols have an equal chance of occurring. This means that strings of a given s and n will yield the same amount of information regardless of which symbols are in the string. For example, the string aaaaaaaaab yields the same amount of information as the string ababababab.

Another measure of information, developed by Shannon, handles this problem

[22]. It weighs information by the probability of the event occurring:

H = \sum_{i=1}^{N} p_i \log \frac{1}{p_i}    (3.4)
  = -\sum_{i=1}^{N} p_i \log p_i    (3.5)

where N represents the number of events in a system and p_i represents the probability of event i occurring. Shannon's entropy is maximized when all probabilities are equal. In this respect, entropy can be viewed as a measure of the dispersion of probability, where dispersion refers to the possibility of multiple events occurring. So, when there is only one possible event (p = 1) there is no dispersion in probability and the entropy is at a minimum (-1 \log 1 = 0). Conversely, when there are multiple events, all with the same probability of occurring, there is a high dispersion of probabilities and the entropy is at a maximum.

Joint Entropy

Many mutual information variants extend from an assumption made by Woods [27]. He observed that two images of the same object often share certain regions or tissues. Woods's ideal assumption was that all pixels with a similar value represent the same tissue type; therefore, the values of corresponding pixels in another image should also be similar to each other. Further, while the pixel values representing a tissue in one image may not have the same values in another, there exists an intensity ratio between them. Ideally the ratio between every pair of corresponding pixels would be the same. This is not always the case: factors associated with image modality and quality can lead to pixels of the same region or tissue having different intensity values in each image.
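Shannon's entropy, Eq. (3.5), is easy to illustrate numerically; the joint-entropy discussion below builds directly on it. This is a standalone sketch, not part of the thesis software:

```python
import math

def shannon_entropy(probs):
    """H = -sum_i p_i log2 p_i (Eq. 3.5); base-2 logs give bits.
    Terms with p_i = 0 are skipped, taking 0 log 0 = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([1.0]))           # one certain event: 0.0 bits
print(shannon_entropy([0.5, 0.5]))      # equal probabilities: 1.0 bit (maximal)
print(shannon_entropy([0.9, 0.1]))      # skewed probabilities: below 1 bit
```

The three calls trace the dispersion argument in the text: a single certain event carries no information, equal probabilities maximize the entropy, and a skewed distribution falls in between.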

Figure 3.3: Examples of joint entropy with a transformation of none, translation and rotation, from left to right

Hill developed a voxel-similarity metric that can be used to visualize Woods's assumption [11]. Like the other voxel-similarity metrics discussed in Chapter 2, Hill's metric measures, given a transformation, how alike the overlapping sections of two images are. The metric uses a joint histogram to do this. A joint histogram is a two dimensional array that records the different combinations or mappings that overlapping voxel intensities can take. For example, in a joint histogram produced from images A and B, a position (x, y) in the array represents the number of times intensity x in image A overlaps with intensity y in image B.

When the joint histogram is visualized as a two dimensional image, patterns can be seen for specific overlaps. In Figure 3.3, the leftmost image is an example of a joint histogram produced from overlapping identical images. Since the images that make the histogram are aligned and identical, every pixel in one image shares the same intensity value with the pixel it overlaps in the other image. Thus, for every intensity i in the images, only the positions (i, i) will be populated in the joint histogram. This is visualized in the histogram as a straight, diagonal line.

As the images become misaligned, regions from one image no longer overlap the correct regions of the other image. The histograms resulting from misalignment by translation and by rotation are shown in Figure 3.3 as the centre and right images. As

the images become misaligned, the number of different overlapping intensity pairs grows, causing the joint histogram values to become more dispersed. Registering the images is related to minimizing the dispersion in the joint histogram.

Hill et al. suggested using Shannon's entropy to do this. Recall from earlier in this chapter that entropy measures the dispersion of a probability distribution. Also recall that the value at each (x, y) location in the joint histogram represents the number of times intensity x in image A overlaps with intensity y in image B. If the joint histogram can instead be used to estimate the probability of an intensity overlap occurring, then Shannon's entropy can be used to measure the dispersion in the histogram:

H(A, B) = -\sum p(A, B) \log p(A, B)    (3.6)

where the summation is over the overlapping intensity pairs of image A on image B. To calculate p(A, B), each (x, y) value in the joint histogram is divided by the total number of overlaps:

p(A(x), B(y)) = \frac{jointhist(x, y)}{N}    (3.7)

where jointhist(x, y) is the value of the joint histogram at position (x, y). This normalized joint histogram is commonly referred to as a Probability Density Function (PDF). Equation (3.6) is referred to as the joint entropy. If the joint entropy is zero, the overlaps are identical. The joint entropy increases as the amount of dispersion in the PDF increases. As discussed earlier in this section, dispersion is related to misalignment of the images. So, to register both images is to minimize the joint entropy, thereby reducing the dispersion in the PDF.

One of the drawbacks of minimizing joint entropy to find a registration is that joint entropy is very sensitive to the content of the overlapping sections of the images.
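Equations (3.6)-(3.7) can be sketched with a few lines of numpy. The bin count and synthetic images are arbitrary choices for illustration, not values from the thesis:

```python
import numpy as np

def joint_entropy(a, b, bins=8):
    """Joint entropy H(A,B) of two overlapping images (Eqs. 3.6-3.7):
    build the joint histogram, normalize it to a PDF, apply Shannon's entropy."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                 # Eq. (3.7): counts -> probabilities
    p = p[p > 0]                          # 0 log 0 is taken as 0
    return -np.sum(p * np.log2(p))        # Eq. (3.6)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
aligned = joint_entropy(img, img)                         # identical overlap
misaligned = joint_entropy(img, np.roll(img, 5, axis=1))  # translated overlap
print(aligned < misaligned)   # misalignment disperses the joint histogram
```

With identical images only the diagonal bins of the histogram are populated, as in the leftmost panel of Figure 3.3; translating one image spreads the counts off the diagonal and raises the entropy.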

For instance, take two images, both containing a large area of uniform intensity such as air or water. Alignments that cause these large areas to overlap will result in a joint histogram with a high probability value at p(i, z), where i and z represent the intensity values of the uniform areas in each image. The higher a single probability is, the less dispersion there is in the joint histogram. Thus, the larger the area of uniform overlap, the lower the dispersion in the joint histogram and, in turn, the lower the overall joint entropy. Therefore, an optimizer seeking to minimize joint entropy will tend to increase the amount of uniform intensity overlap.

Mutual Information

Mutual information was introduced by Viola et al. as an attempt to account for the drawbacks of joint entropy [26]. Viola et al. developed a new metric that extended Hill et al.'s metric (3.6) to include the entropy of each image in the calculation. The entropy contributed by each image is only calculated for the pixels that overlap with pixels from the other image:

H(A) = -\sum p(a) \log p(a)  for all a that overlap with image B    (3.8)
H(B) = -\sum p(b) \log p(b)  for all b that overlap with image A    (3.9)

Equations (3.8) and (3.9) are referred to as an image's marginal entropy. Viola et al. used them to create a new metric named Mutual Information:

I(A, B) = H(A) + H(B) - H(A, B)    (3.10)

where H(A, B) is the joint entropy from Equation (3.6). As mentioned in Section 3.2.1, minimizing the joint entropy reduces dispersion in the PDF, which can lead to the desired alignment. So, registration using MI amounts to maximizing Equation (3.10)

in order to minimize the joint entropy.

The added benefit of MI over joint entropy is its resilience to bad alignments caused by large overlaps of uniform intensity values. As discussed in Section 3.2.1, such an overlap results in a lower joint entropy. In MI, such an overlap also leads to lower marginal entropies (H(A) and H(B)). This lowers the overall MI value, making these alignments less desirable to an optimizer.

Registration of Multiple 2D Images to 3D

MI, Equation (3.10), can be used as a metric to register multiple two dimensional images to a three dimensional volume. To obtain this registration, a single transformation needs to be found that maps all of the ultrasound images to the CT coordinate frame. This can be done using a modified MI metric and a gradient descent optimizer.

The modified MI algorithm is an iterative process. At each iteration a new transformation is formulated by the optimizer to map the ultrasound images' coordinate space to the CT volume's coordinate space. The MI metric is used to gauge how similar the overlapping sections of the transformed ultrasound images and the CT volume are. The iterations terminate when the metric value is above a user-set threshold, or when the movement of the ultrasound images under the transformation produced by the optimizer is minimal.

Metric - Modified MI

Equations (3.6), (3.8) and (3.9) are calculated over a summation of pixel overlaps, where each summation iteration represents a pixel from image A overlapping with a pixel

from image B. Each iteration is independent of the others, meaning that any calculation made in an iteration involving a certain pixel will not affect the calculation of any other iteration involving any other pixel. So, Equations (3.6), (3.8) and (3.9) can be thought of as comparing two sets of pixels rather than two images. If a set of multiple two dimensional images is thought of as a collection of pixels rather than a set of n images, the joint entropy can be rewritten to account for the additional images:

H'(M, B) = -\sum_{i=1}^{n} \sum_{j=1}^{N_i} p(M_{ij}, B) \log p(M_{ij}, B)    (3.11)
         = \sum_{i=1}^{n} H(M_i, B)    (3.12)

where M_i represents image i in the collection of images M, and M_{ij} represents the j-th pixel in image M_i. The marginal entropy of the image set can be calculated in a similar fashion:

H'(M) = \sum_{i=1}^{n} H(M_i)    (3.13)

The value of the metric needs to be calculated at each iteration of the registration process. To find the metric value for a given transformation, the transformation must be applied to the pixels of the ultrasound images before calculating the metric. The above equations can be rewritten to give the final components needed to define MI given a transformation (R, o):

H'(M, R, o, B) = \sum_{i=1}^{n} H(M_i R + o, B)    (3.14)

H'(M, R, o) = \sum_{i=1}^{n} H(M_i R + o)    (3.15)

where o and R are, respectively, the translation and rotation offsets between M and B. The final metric is:

I'(M, R, o, B) = H'(MR + o) + H(B) - H'(MR + o, B)    (3.16)

This has the same properties as the original mutual information, Equation (3.10). Finding a registration is related to minimizing the entropy in the PDF, Equation (3.14). So, to find the ideal alignment, an R and o must be found that maximize Equation (3.16).

Optimization

To maximize Equation (3.16), the gradient descent optimizer from the National Library of Medicine's Insight Segmentation and Registration Toolkit (ITK) was used [14]. It requires a metric and a gradient at a given transformation. The gradient for a single image at a given transformation is calculated by taking the average of the gradient at each overlapping pixel in the CT image, then transforming the average gradient into world coordinates by multiplying it by its Jacobian matrix, J:

G(R, o) = \frac{1}{N} \sum_{i=1}^{N} J_i(R, o) \, \nabla B(x_i R + o)    (3.17)

where x_i is the i-th of the N overlapping points and \nabla B is the gradient of the CT volume at x_i. The Jacobian can be interpreted as a matrix that indicates, for a point in the input space, how much its mapping to the output space will change in response to a small variation in one of the transform parameters. It is used in image registration to map the gradient found in image space (pixel space) to world space (the absolute position of the pixel).
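The entropy quantities that combine into Eq. (3.10) can be estimated from a single joint histogram, since its row and column sums give the marginal distributions. A minimal numpy sketch, with an arbitrary bin count and synthetic images standing in for the US/CT data:

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """I(A,B) = H(A) + H(B) - H(A,B) (Eq. 3.10), with the marginal and
    joint entropies estimated from the joint histogram of the overlap."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pab = hist / hist.sum()                  # joint PDF, Eq. (3.7)
    pa, pb = pab.sum(axis=1), pab.sum(axis=0)  # marginal PDFs

    def H(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return H(pa) + H(pb) - H(pab.ravel())

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
# An inverted copy predicts the original perfectly despite different values;
# an unrelated random image predicts nothing.
mi_predictive = mutual_information(img, 255.0 - img)
mi_unrelated = mutual_information(img, rng.integers(0, 256, size=(64, 64)).astype(float))
print(mi_predictive > mi_unrelated)
```

The inverted-image case shows why MI suits multimodal data: the intensities disagree everywhere, yet each intensity in one image maps consistently to one intensity in the other, so the MI value is high.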

Registration Using Mutual Information

The ITK registration library was used as the framework to implement the MI algorithm. ITK's registration framework functions by transforming a moving image about a fixed image. The transformation of the moving image is created using the output of a metric and an optimizer. At each iteration the metric produces a value indicating how well the images are aligned. The optimizer uses this value to move the images to a position thought to produce a better metric value. This cycle continues until the incremental translation produced by the optimizer is under a user-specified threshold, or the maximum number of iterations has been reached.

Given:
- a set of 2D ultrasound images, M;
- a 3D CT volume, B;
- an initial guess of the alignment, T = (R, o);
- a minimal step S that the optimizer must move at each iteration.

Preprocessing: the CT and US images are changed to adhere to the prerequisites of mutual information.

For each iteration:
1. Pass I'(MT, B) and G(T) to the optimizer to produce T'.
2. If the distance between T and T' is less than S, stop the registration.

3. Set T = T'.
4. Begin a new iteration using T.

The metric we used was a modified version of ITK's Mattes Mutual Information. In its unmodified form, the Mattes Mutual Information metric registers only two images, both two dimensional or both three dimensional. As stated in Section 3.2.1, for this experiment it is necessary for the metric to register multiple two dimensional images to a three dimensional image; specifically, we want to implement the modified metric outlined above. As stated there, the set of US images can be thought of as a group of voxels scattered in three dimensions rather than as multiple two dimensional images. Using this, the ITK Mattes Mutual Information metric was modified to accept a three dimensional volume (CT) and to create a group of voxels from the multiple two dimensional images.

This was done by exploiting a performance enhancement of the metric. The performance of the metric is closely tied to the number of pixels in the moving image: the greater the number of pixels, the slower the execution of the metric. To improve performance, ITK uses only a subset of the moving image's pixels. The subset is populated by randomly sampling x pixels from the moving image, where x is a value tuned by the user. The smaller the value of x, the fewer calculations are needed and the faster the metric executes. The modified version of the metric populates the subset of moving image pixels by randomly sampling x/N non-zero pixels from each of the N US images. This gives the same size of subset with a uniform pixel sample of the bone surface from each image. This subset is treated as the moving image in the registration process.
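The subsampling scheme just described can be sketched as follows. This is an illustration with hypothetical names, not ITK's actual implementation:

```python
import numpy as np

def sample_moving_pixels(us_slices, total_samples, rng=None):
    """Draw total_samples // N non-zero pixel positions from each of the
    N ultrasound slices, giving a uniform sample across the images.
    Returns (slice_index, row, col) triples."""
    rng = rng or np.random.default_rng()
    per_slice = total_samples // len(us_slices)
    picks = []
    for k, img in enumerate(us_slices):
        rows, cols = np.nonzero(img)      # candidate (e.g. bone-surface) pixels
        idx = rng.choice(len(rows), size=min(per_slice, len(rows)), replace=False)
        picks.extend((k, rows[i], cols[i]) for i in idx)
    return picks

# Four toy "slices", each with three non-zero pixels on the diagonal.
slices = [np.diag([0.0, 5.0, 7.0, 9.0]) for _ in range(4)]
samples = sample_moving_pixels(slices, 8, np.random.default_rng(2))
print(len(samples))   # 2 samples from each of the 4 slices -> 8
```

Sampling per slice, rather than from the pooled pixel set, is what guarantees that every ultrasound image contributes equally to the metric regardless of how many non-zero pixels it contains.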

Preprocessing

In order to use mutual information as a metric, the images must follow the assumption made by Woods: in an image, pixels that represent the same tissue will have similar values, so pixels of the corresponding tissue in another image should also have similar values. The values of the same tissue from image to image are not necessarily the same, but are assumed to have a consistent mapping. The purpose of most of the preprocessing decisions was to make the US and CT images follow Woods's assumption.

Preprocessing Ultrasound

As discussed in Chapter 2, ultrasound images contain a large amount of noise that is not present in CT images. These differences can be reduced through threshold filtering [25]. The average value of some of the speckle noise is lower than the values of pixels representing actual tissue. If a threshold can be found that is lower than the pixels representing real tissue and higher than some of the speckle noise, eliminating all values below that threshold results in an ultrasound image with less noise. In Figure 3.4, pixels lower than a certain threshold are changed to black. Using this method, a large amount of noise can be removed.

Caution must be taken when choosing the threshold value. A value too large might remove pixels that represent tissue. A value too small could have little effect on noise reduction. The method used in this thesis tuned the threshold value by studying the effect of different values on various ultrasound images acquired using the same calibration setting. The results of MI registration can be found in Chapter 4.
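The threshold filter described above amounts to a single clamping operation. A minimal sketch, with an arbitrary threshold value (in practice the value is tuned for the acquisition settings, as noted above):

```python
import numpy as np

def threshold_filter(us_image, threshold):
    """Zero out every pixel below `threshold`, suppressing low-intensity
    speckle noise while leaving brighter tissue responses untouched."""
    return np.where(us_image < threshold, 0, us_image)

us = np.array([[12, 40,  3],
               [90,  7, 55]])
print(threshold_filter(us, 20))
# [[ 0 40  0]
#  [90  0 55]]
```

The trade-off in the text is visible even in this toy example: raising the threshold past 40 would delete a plausible tissue pixel, while lowering it below 12 would leave the noise untouched.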

Figure 3.4: Ultrasound image before and after thresholding

3.3 Iterative Closest Point

The ICP method was introduced by Besl and McKay for the purpose of registering point sets, line segments, and implicit curves representing the same object [5]. Besl and McKay assumed that every point in one point set has a corresponding point in the other. The sets of points can be registered by finding a transformation that minimizes the distance between these corresponding points. ICP's robustness makes it an attractive solution for many medical image registration problems. Only recently has it been applied to ultrasound registration [6, 3].

Since ICP only registers sets of points, any medical image registration technique designed to use ICP must first convert the images into points representing the same object. Those points can be used in the ICP method to find an alignment that can later be applied to the original images (Figure 3.5).

The following describes how to apply the ICP method to US-CT registration. First is a discussion of the motivation to use ICP as a technique to register ultrasound

and CT. Next is a summary of Besl and McKay's ICP. Then a description of the algorithm developed in this thesis to register ultrasound to CT using ICP is provided.

Figure 3.5: Left - Ultrasound image with segmented surface superimposed. Right - Segmented points from ultrasound on a CT mesh.

Motivation

Multi-modal registration using ultrasound images is difficult. This is partially due to the lack of similarity that ultrasound images have to other modalities. For instance, voxel-similarity methods such as mean squares or normalized correlation are based on the assumption that there exists an intensity mapping from the voxels of one image to the voxels of the other. Due to factors discussed in Chapter 2, such as speckle noise and acoustic artifacts like bone shadow and echo artifacts, an intensity mapping between the images can be difficult or even impossible to find.

Feature based methods, discussed in Chapter 2, have the potential to work around a similarity difference. They are based on the assumption that the images have shared features. The benefit of feature-based methods over voxel-similarity based methods

is that there is no assumed mapping of intensities between images. The only assumptions made about the images are that they share a similar feature and that the feature is distinguishable. As long as those assumptions remain true, the images to be registered can differ in any other aspect. If a method were developed to separate the bone surface from the rest of the image in both CT and US, a feature-based approach could be used to align the segmented parts.

ICP is an effective feature based registration method. It converges to a minimum quickly and accurately. Probably the most beneficial aspect of ICP is its flexibility. It is fully automatic, meaning that a predefined correspondence between points is not needed. Other feature matching techniques require the correspondence to be known in advance. Given proper initialization, ICP can produce very precise results without this information.

Iterative Closest Point Theory

The ICP algorithm is based on the assumption that when two point sets are aligned, the average distance from one point set to the other is at a minimum. Distance is defined as the average distance between corresponding points. The corresponding points are assumed to be the two points, one from each set, with the shortest Euclidean distance between them. The ICP method uses these distances between closest points as a metric to gauge how well aligned the two point sets are:

\frac{1}{N} \sum_{i=1}^{N} \| (a_i R + o) - b'_i \|    (3.18)

Equation (3.18) measures the average distance from every point in A to its closest point in B, where a_i is the i-th of the N points in A and b'_i is the closest point in B to a_i. The set of points containing all the b'_i, written B', represents the alleged corresponding points

to A. If the distance between B' and A is not at a minimum, then the points in A are not aligned with the points in B.

The ICP algorithm is designed to minimize Equation (3.18) to find the ideal alignment. It does this, as one might infer, in an iterative manner. The algorithm begins by finding the closest points in B to T(A), called B', for a given transformation T = (R, o). It uses A, B' and T to evaluate Equation (3.18). If that value is not below a user-specified threshold, it finds a new transformation, T', that minimizes Equation (3.18); this transform minimizes the distances from A to B'. The final step of the iteration is to multiply T' by T and restart the entire process. The two complex aspects of this method are finding the closest points in B to A, and estimating a transform between A and B' that minimizes Equation (3.18).

Transform Estimation

For two different point sets A and B, the basis of the ICP algorithm is to find a transformation of the form:

b_i = a_i R + o    (3.19)

for every i-th point in A and B. Assuming that the transformation will not be perfect, a residual error will occur:

e_i = b_i - (a_i R + o)    (3.20)

To find the best rotation and translation, one must minimize the sum of squares:

\sum \| e_i \|^2    (3.21)

where the summation is over all N points in A and B. This is in fact the metric, Equation (3.18), defined earlier in this chapter.
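Equation (3.18) can be sketched directly. The brute-force nearest-neighbour search below is for illustration only; on real point sets a spatial structure such as a k-d tree would be used:

```python
import numpy as np

def icp_metric(A, B, R, o):
    """Average closest-point distance of Eq. (3.18): move A by (R, o), then
    average each moved point's distance to its nearest neighbour in B.
    Points are stored as row vectors, matching b_i = a_i R + o."""
    moved = A @ R + o
    d = np.linalg.norm(moved[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).mean()

A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
o = np.array([0.5, -0.2, 0.0])
B = A + o                                  # B is A shifted by o
print(icp_metric(A, B, np.eye(3), o))              # correct transform: 0.0
print(icp_metric(A, B, np.eye(3), np.zeros(3)))    # identity transform: > 0
```

Note that the nearest neighbour of a moved point need not be its true correspondent; that is precisely why ICP must re-match and re-estimate over several iterations rather than solving once.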

Translation

The translation between A and B makes the centroid of A coincident with that of B. Picture two point sets, A and B, differing only by a translation offset o, meaning:

b_i = a_i + o    (3.22)

The centroids are defined as:

c_a = \frac{1}{N} \sum_{i=1}^{N} a_i  and  c_b = \frac{1}{N} \sum_{i=1}^{N} b_i    (3.23)

From Equation (3.22) it can be written:

c_b = \frac{1}{N} \sum_{i=1}^{N} (a_i + o)    (3.24)
    = \frac{1}{N} \sum_{i=1}^{N} a_i + \frac{1}{N} \sum_{i=1}^{N} o    (3.25)
    = c_a + o    (3.26)

If the point sets also differ by a rotation, Equation (3.22) becomes:

b_i = a_i R + o    (3.27)

A similar calculation shows that o is independent of R:

c_b = \frac{1}{N} \sum_{i=1}^{N} (a_i R + o)    (3.28)
    = \frac{1}{N} \sum_{i=1}^{N} a_i R + \frac{1}{N} \sum_{i=1}^{N} o    (3.29)
    = c_a R + o    (3.30)

meaning that no matter the rotational offset, the vector between the rotated centroids is o. Since R is independent of o, if the point sets' centroids are matched by translating one set by the centroid difference, the only remaining variation between the two sets should

be the rotation. Equation (3.20) can then be rewritten as:

e_i = b_i - a_i R    (3.31)

\sum \| e_i \|^2    (3.32)

Rotation

The final part of the transformation needed is the rotation matrix R that minimizes Equation (3.32):

\sum \| b_i - a_i R \|^2

The process of minimizing Equation (3.32) goes beyond the scope of this thesis. The most fundamental approaches include the quaternion method of Horn [12], the orthonormal matrix method of Horn et al. [13] and the Singular Value Decomposition method of Arun et al. [2].

ICP Algorithm

Now that the fundamentals needed to understand the ICP method have been defined, a formal registration algorithm using ICP can be presented.

Given:
- two point sets A and B;
- an initial guess at the alignment, T_0;
- a user-set threshold equal to the maximum distance by which the two point sets should differ.

For iteration k:

1. Create a new point set B′ where b′_i is the closest point in B to point a_i in A.
2. Estimate the best transformation T_k, using Horn's method, that minimizes the distance between A and B′.
3. Compute the average distance between the sets A and B′ using Equation (3.18). If it is less than the threshold, stop.
4. Apply T_k to each a_i.

The final transformation, T, is the product of all the T_k's, i.e. T = T_k · · · T_3 T_2 T_1 T_0.

US-CT Registration Using ICP

Our interest is in using the ICP method to register multiple ultrasound images to a CT volume. As described above, the ICP method can only register two point sets; furthermore, a prerequisite of ICP is that these point sets be of the same object. So, to use ICP, a set of points of the same object must be extracted from both the ultrasound and CT images. Once these points are known, ICP can be used to find a transformation between the two point sets, which is then applied to the original images (Figure 3.6).
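The iteration above can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: closest points are found by brute force, the transform step uses the SVD method of Arun et al. in place of Horn's, and, rather than accumulating the product of the T_k's explicitly, each pass re-estimates the full transform from the original A to the current closest points B′, which is equivalent.

```python
import numpy as np

def icp(A, B, iters=50, tol=1e-6):
    """Returns (R, o) such that A @ R + o lies close to B (rows = points)."""
    R, o = np.eye(3), np.zeros(3)
    for _ in range(iters):
        A_t = A @ R + o
        # Step 1: closest point in B for each transformed point of A.
        d2 = ((A_t[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        B_prime = B[d2.argmin(axis=1)]
        # Step 2: best rigid transform from A to B' (SVD of Arun et al.).
        c_a, c_b = A.mean(axis=0), B_prime.mean(axis=0)
        U, _, Vt = np.linalg.svd((A - c_a).T @ (B_prime - c_b))
        R = U @ Vt
        if np.linalg.det(R) < 0:          # reflection guard
            U[:, -1] *= -1
            R = U @ Vt
        o = c_b - c_a @ R
        # Step 3: stop once the mean residual distance is below the threshold.
        if np.linalg.norm(A @ R + o - B_prime, axis=1).mean() < tol:
            break
    return R, o

# Sanity check: registering a point set to itself yields the identity.
A = np.random.default_rng(1).normal(size=(60, 3))
R, o = icp(A, A)
print(np.allclose(R, np.eye(3)), np.allclose(o, np.zeros(3)))  # True True
```

In practice the initial guess T_0 matters greatly, since the closest-point correspondences are only meaningful when the sets start reasonably aligned.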

Figure 3.6: Ultrasound surface mesh registered to CT surface mesh

Segmentation

In order to decide which feature to segment from the ultrasound and CT images, the characteristics of both should be studied. A good feature to segment is an object that is noticeable in both modalities and provides enough information to drive the registration. In Figure 3.2, one of the most distinguishable features in both the CT and the ultrasound is the bone surface: in the ultrasound it is represented by a bright band followed by a dark shadow, while in the CT it appears as a region of uniform, high intensity values. Given certain assumptions, these properties tend to be among the most invariant features of each modality.

Segmentation of Ultrasound

There has been extensive work in the area of bone segmentation. Amin et al. developed a method to segment bone from soft tissue using a directional edge detector

[1]. It takes advantage of the acoustic artifact known as bone shadow (explained in Section 2.1.1). Amin et al.'s assumption is that all pixels below the bone surface will be zero. The edge detector propagates upwards, in the opposite direction of the US probe's waves, along each column of the ultrasound image, finds the first intensity higher than some threshold, and labels that point as the bone surface.

The technique used for the ICP algorithm in this thesis is similar to Amin et al.'s. A filter propagates vertically, finding the brightest point in each column; these brightest points are then labeled as bone. This technique relies on the notion that the acoustic impedance difference between the soft tissue and the bone will be the largest in the entire image. The benefit of choosing the brightest pixel in each column is that it does not rely on a user-set threshold. Segmentation by picking the first intensity larger than some threshold depends on the values contained in the US image: if the threshold is too small, image noise can be mistaken for a bone surface point; if it is too large, the bone can be overlooked. Although the brightest point is not always the bone surface, it usually is when the ultrasound probe is within 20 degrees of the normal to the bone surface [1].

Registration Algorithm

Our proposed algorithm to register US images to a CT volume using ICP is as follows:

Given
- A set of 2D ultrasound data;
- A 3D CT volume,

- An initial guess of the alignment, T = (R, o);
- The maximum average distance, D, by which the segmented point sets may differ.

Do
1. Extract the bone surface, in the form of a mesh, from the multiple ultrasound images and from the CT volume.
2. Find a transformation between the meshes using Besl and McKay's ICP, given T and D.
3. Apply this transformation to the original images.

3.4 Points To Image

The Points to Image (PTI) method aligns a set of segmented CT bone surface points with multiple ultrasound images (Figure 3.7). This is accomplished by attempting to position the points of the CT mesh at their correct locations on the multiple ultrasound images. The following section describes the PTI method, including the motivation behind it as well as a detailed description of the theory and implementation of the technique.

Motivation

A popular method for image registration is feature-based alignment, in which a transformation is identified that aligns a feature common to both images.
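The brightest-pixel-per-column ultrasound segmentation that the ICP pipeline above relies on can be sketched as follows. This is an illustrative reconstruction, not the thesis's code; `segment_bone` is a hypothetical helper, and rows are assumed to run from the probe (top) downward.

```python
import numpy as np

def segment_bone(us_image):
    """Label the brightest row in each column as the candidate bone surface.
    No user-set threshold is required. Returns (row, col) pairs."""
    rows = np.argmax(us_image, axis=0)      # brightest row per column
    cols = np.arange(us_image.shape[1])
    return np.column_stack([rows, cols])

# Toy image: a bright band on row 3, plus a weaker soft-tissue echo above it.
img = np.zeros((8, 5))
img[3, :] = 200.0
img[1, 2] = 50.0
surface = segment_bone(img)
print(surface[:, 0])        # the band on row 3 is picked in every column
```

A thresholded first-crossing detector would have to be tuned so that the echo at row 1 neither triggers (threshold too low) nor the band at row 3 is missed (threshold too high); the per-column maximum sidesteps that tuning.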

Figure 3.7: left - ultrasound image, right - CT bone surface mesh of the same object

Feature-based registration is commonly performed with high-quality images such as CT or MRI, because a major contributor to RMS error is the ability to accurately segment the images used. High-quality images are typically easier to segment, while low-quality or noisy images typically give less successful segmentation results.

Feature-based alignment proves more difficult when it depends on ultrasound segmentation. Due to problems associated with ultrasound data such as noise, speckle and acoustic artifacts, segmentation of ultrasound data is difficult, and therefore more prone to error than segmentation of CT or MRI. Since feature-based alignment depends on segmentation, a method built on error-prone segmentation will itself be error-prone. Feature-based registration using ultrasound data has been done before, by Amin [1] in their Ph.D. dissertation and by the ICP method introduced in Section 3.3; in each case the result is heavily dependent on the success of the segmentation method.

The PTI method is an attempt to create a feature-based method that does not rely on segmentation of ultrasound data. Instead of registering two sets of points like

ICP, PTI registers a set of points to an image. This is accomplished by finding a transformation that places the points from the CT mesh at the correct location on the ultrasound image (Figure 3.8). When the ultrasound data is placed in the correct position, its bone surface, represented by the bright curve, overlaps with the bone surface mesh extracted from the CT volume. If the images are misaligned, the bone surfaces will not overlap. The PTI method attempts to find the best possible overlap of the two bone surfaces.

Figure 3.8: CT mesh (solid line) overlaid on ultrasound

Theory

The purpose of PTI is to register a two-dimensional image with a three-dimensional set of points. While this method of registration is rather new, similar techniques have been attempted. Brendel et al. developed a method to register a three-dimensional ultrasound volume to a three-dimensional CT bone surface mesh [6]. They take advantage of the property that bone in ultrasound images tends to produce the brightest pixels. All the points in the CT mesh represent the location of the

bone surface in the CT volume. So, when the data sets are aligned, the average value of the ultrasound pixels overlapping with points from the CT mesh should be higher than for any other alignment. Their metric is a measure of this:

    M(R, o) = (1/n) Σ_{i=1}^{n} I(x_i R + o)    (3.33)

where I is the pixel data of the ultrasound and x_i is the three-dimensional position of a CT bone surface point. R is a rotation matrix defined by three perpendicular axes and o is a translational offset. The optimizer changes R and o to relocate the points so as to maximize M.

Aylward et al. developed a technique similar to Brendel et al.'s [3]. They attempted to align a single ultrasound image with segmented liver vessels, which are tubular in structure. Aylward et al. weight each point by its proximity to the centre of the tube:

    M(R, o) = (1/n) Σ_{i=1}^{n} w_i I(x_i R + o)    (3.34)

where w_i is the assigned weighting. This method proves effective but is difficult to transfer to bone surface registration.

The PTI method is similar to the methods of Brendel et al. and Aylward et al. Like Brendel's, PTI is based on the assumption that the brightest pixels in an ultrasound image represent the bone's surface: when the alignment between the CT and the ultrasound is ideal, the average value of the pixels overlapping with the bone surface points is at a maximum. The novel aspect of PTI is that it samples two-dimensional ultrasound, whereas Brendel's and Aylward's methods sample three-dimensional ultrasound. Figure 3.9 shows the difference between the methods.

The challenge of using two-dimensional ultrasound rather than three-dimensional ultrasound is finding intersections between the CT and the ultrasound. Figure 3.9 displays

Figure 3.9: Difference between sampling a three-dimensional ultrasound and a two-dimensional ultrasound. The top two images show a three-dimensional ultrasound from two different viewpoints: one looking down the normal of the US (top right) and one looking 90 degrees from the normal (top left). The bottom two views both look 90 degrees from the normal. In the two-dimensional case (PTI), the US is a line. The lines extending from the plane represent the weight introduced in the PTI method.

the difference between sampling a three-dimensional ultrasound (Brendel) versus a two-dimensional ultrasound (PTI). In Brendel's case, the ultrasound is a volume that acts as a bounding box: a CT point within the bounding box intersects the ultrasound, and the x, y, z position of each intersecting CT point is used to sample it. In PTI's case, the ultrasound is a plane rather than a volume, and it is very unlikely that a CT point will intersect it exactly.

We introduce a modification to Brendel's metric (Equation 3.33) that uses CT points that do not necessarily intersect the ultrasound plane. For N images and n points,

    M(R, o) = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{N} w_ij I_j(x′_i)    (3.35)

where x′_i is the projection of the transformed point x_i R + o onto image I_j along the normal of I_j (the lines extending from the ultrasound plane in Figure 3.9), and w_ij is the weight given to the pixel value at x′_i. The weight depends on the distance of x_i from the image plane: a maximum weighting is given to points intersecting the plane, and the weighting is progressively reduced up to a user-specified maximum distance, beyond which points are given no weighting at all. Then w_ij is defined as

    w_ij = 1 − d(x_i, I_j)/c    if d(x_i, I_j) ≤ c
         = 0                    if d(x_i, I_j) > c    (3.36)

where d(x_i, I_j) is the distance of x_i from the plane of US image I_j, and c is the user-specified maximum distance.

The influence of each ultrasound image thus extends into the three-dimensional space surrounding the image. Even if no points directly intersect the plane, a registration can be found using the points that lie near it.
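Equations (3.35) and (3.36) can be sketched for a single image as follows. This is a simplified illustration, not the thesis's implementation: the image plane is assumed to be z = 0 with one pixel per unit, nearest-pixel sampling is used, and `pti_weight` and `pti_metric_single` are hypothetical helper names.

```python
import numpy as np

def pti_weight(d, c):
    """Equation (3.36): weight 1 on the plane, linear falloff to 0 at c."""
    return np.where(d <= c, 1.0 - d / c, 0.0)

def pti_metric_single(points, image, c):
    """Average weighted intensity under the CT points for one image."""
    d = np.abs(points[:, 2])                   # distance to the z = 0 plane
    w = pti_weight(d, c)
    # Project along the plane normal and sample the nearest pixel.
    rows = np.clip(np.round(points[:, 1]).astype(int), 0, image.shape[0] - 1)
    cols = np.clip(np.round(points[:, 0]).astype(int), 0, image.shape[1] - 1)
    return float(np.mean(w * image[rows, cols]))

# A bright "bone band" on row 2: CT points lying over it score higher than
# the same points shifted off it, which is what the optimizer exploits.
img = np.zeros((6, 6))
img[2, :] = 100.0
on_band = np.array([[x, 2.0, 0.2] for x in range(6)])   # 0.2 off the plane
off_band = on_band + np.array([0.0, 2.0, 0.0])
print(pti_metric_single(on_band, img, c=1.0) >
      pti_metric_single(off_band, img, c=1.0))   # True
```

Note that the on-band points score even though none of them lies exactly in the plane, which is precisely the point of the weighting.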

Optimization

To maximize M in Equation (3.35), a gradient descent optimizer from the National Library of Medicine's Insight Segmentation and Registration Toolkit (ITK) is used. The optimizer requires the metric, M, and its gradient for a given transformation. The gradient for a given R and o is calculated by taking the average of the image gradient at each intersecting pixel of each ultrasound image and transforming it into three-dimensional space by multiplying it by a Jacobian matrix:

    G(R, o) = (1/n) Σ_{i=1}^{n} Σ_{j=1}^{N} w_ij J_j(R, o) ∇I_j(x′_i)    (3.37)

The gradient is weighted by the same w_ij used in Equation (3.35).

Implementation

The PTI metric (Equation 3.35) can be used to define a method to register a CT image to an ultrasound.

For:
- A set of two-dimensional ultrasound data
- A CT bone surface mesh of points
- An initial guess of the alignment, T = (R, o)
- A minimal step S that the optimizer must move at each iteration

Do
1. Position the ultrasound images at T.
2. Pass the values G(T) and M(T) to the optimizer to produce T*.

3. If the distance between T* and T is less than S, stop the registration.
4. Set T = T*.
5. Return to step 1.

Speed

One of the speed bottlenecks of the algorithm is the calculation of the metric and gradient, which are summations over a set of CT mesh points for every ultrasound image. A typical input for registration could be a CT mesh of up to 100,000 points and anywhere from 5 to 200 ultrasound images. If every point in the CT mesh were used, then up to 200 × 100,000 = 20,000,000 calculations would be made at every iteration of the registration process.

To increase the speed of the algorithm, the number of points used to calculate the metric and gradient can be reduced. The volume that the CT spans is usually much bigger than an ultrasound image: a General Electric ultrasound probe's average image size is about 3cm x 3cm, while a CT scan of a pelvis can be upwards of 30cm x 30cm x 30cm. Only the points within distance c of an ultrasound image are needed to calculate the metric and gradient, so a different subset of CT mesh points is used for each image in the summations of Equations (3.35) and (3.37). The size of the region chosen should be based on how close the initial alignment guess is expected to be to the ideal one: for initial guesses with a large error the region should be big, while for a more precise guess a small region can be chosen.
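The per-image point subset described above amounts to a point-to-plane distance test. The sketch below is an illustration under assumed geometry (each image plane given by a point and a normal); `points_near_plane` is a hypothetical helper name.

```python
import numpy as np

def points_near_plane(points, plane_point, plane_normal, c):
    """Keep only the mesh points within distance c of the image plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.abs((points - plane_point) @ n)     # unsigned point-plane distance
    return points[d <= c]

# A 100,000-point mesh spanning roughly a 30 cm cube, filtered against one
# image plane (z = 0) with a 5 mm cutoff: only a thin slab survives.
rng = np.random.default_rng(2)
mesh = rng.uniform(-150.0, 150.0, size=(100_000, 3))
subset = points_near_plane(mesh, np.zeros(3), np.array([0.0, 0.0, 1.0]), c=5.0)
print(len(subset) < len(mesh))   # True
```

With the numbers above, only a few percent of the mesh falls inside any one slab, which is where the speedup comes from.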

Preprocessing

Gradient

One problem with this algorithm is its dependence on the size of the US bone section. At the bone section the pixels are at their brightest; pixels extending away from that area become increasingly dimmer until the value reaches zero (the occluded section behind the bone) or some other value defined by the soft tissue beside the bone. It is this change in value (from small to large) that a gradient descent optimizer uses to climb to a maximized alignment. This area is the influence of the bone section. Unfortunately, the area is quite small: in most of the US images collected in this experiment, the width of the US bone reflection is about 1cm.

To reduce the effect of this problem, the US images were preprocessed to increase the influence of the bone reflection area. Each image was blurred along its vertical axis using a Gaussian smoothing filter:

    G(x) = (1/(√(2π) σ)) e^(−x²/(2σ²))    (3.38)

where x is the distance (in pixels) from the pixel being blurred, and the value of σ was five. A σ of five was found to best balance the length and distribution of the Gaussian blur (see Figure 3.10). The blur increases the size of the bone reflection without changing the direction of the derivative between pixels: while the derivative between pixels is smaller, the influence of the bone is increased.
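The column-wise blur of Equation (3.38) can be sketched as follows. This is an illustration rather than the thesis's code; `gaussian_kernel` and `blur_columns` are hypothetical helper names, and the kernel is normalized so overall intensity is preserved.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Discrete 1-D Gaussian of Equation (3.38), normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return g / g.sum()

def blur_columns(image, sigma=5.0):
    """Blur each column (the vertical axis) of the image independently."""
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="same"),
                               0, image)

# A one-pixel-wide "bone reflection" on row 20 of a 41-row column.
img = np.zeros((41, 3))
img[20, :] = 100.0
blurred = blur_columns(img)
# The peak stays on the same row, but its influence now spans many rows,
# giving the gradient descent optimizer a wider basin to climb.
print(int(np.argmax(blurred[:, 0])))          # 20
print(int((blurred[:, 0] > 1.0).sum()) > 1)   # True
```

Because the kernel is symmetric, the peak does not move; only its support widens, which is exactly the property the preprocessing relies on.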

Figure 3.10: ultrasound image before and after Gaussian smoothing filter

Removal of Backfacing Normals

It is possible for bone surface points of the CT to have no matching bright pixels in the US image. US waves cannot penetrate bone due to its high density, so any bone surface below another section of bone is occluded in a US image. This is problematic when the occluding bone surface is thin or concave. Figure 3.8 shows an example of this occlusion: the US image is large enough to encapsulate both the upper and lower sections of the CT bone surface, yet only the upper section is present in the US image. Even if the US image and the CT points were perfectly aligned, the metric would report a poor alignment, because many of the overlapping points would fall on zero-valued pixels. In these cases it is advantageous to remove the points that would most likely not appear in the US image.

As a preprocessing step, occluded points in the CT were removed by eliminating points with a backfacing normal relative to the US probe (see Figure 3.11). Segmented CT surface points can only match bright points of the US image whose normal is about 90 degrees or less from the direction vector of the US probe. Any

points with a larger angle would not appear in the US image, because they would be occluded by a bone surface closer to the US probe.

Figure 3.11: Backfacing Normal Removal - CT mesh (solid line) overlaid on ultrasound; the mesh points with normals more than 90 degrees from the direction of the ultrasound probe have been removed
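The removal step reduces to a dot-product test between each point's normal and the probe direction. The sketch below is an illustration under assumed conventions, not the thesis's code: the probe's waves are taken to travel along `probe_dir`, so a surface facing the probe has a normal with a negative dot product with that direction, and all other points are treated as backfacing and removed; `remove_backfacing` is a hypothetical helper name.

```python
import numpy as np

def remove_backfacing(points, normals, probe_dir):
    """Keep only surface points whose normal opposes the wave direction."""
    d = probe_dir / np.linalg.norm(probe_dir)
    facing = normals @ d < 0.0
    return points[facing], normals[facing]

# Waves travel in +z. The upper surface (normal -z, toward the probe) is
# kept; the lower surface (normal +z) lies in the bone shadow and is removed.
pts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 3.0]])
nrm = np.array([[0.0, 0.0, -1.0], [0.0, 0.0, 1.0]])
kept, _ = remove_backfacing(pts, nrm, np.array([0.0, 0.0, 1.0]))
print(len(kept))   # 1
```

The exact sign convention depends on how the mesh normals and the probe's direction vector are defined in the tracking setup; only the 90-degree criterion itself comes from the text.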

Chapter 4

Experiment and Results

This chapter describes the experiments that were performed to determine the accuracy of the Mutual Information, Iterative Closest Point and Points to Image registration methods on a phantom pelvis. The first section (4.1) describes the experiment arrangement, focusing on the apparatus used and the data acquisition. The next section (4.2) outlines the experiment design. The final section (4.3) reports the accuracy of each method.

4.1 Experiment Arrangement

All of the data used in the experiment was taken from a pelvis phantom produced by Sawbones (WA, USA; Figure 4.1). The three registration methods that were tested required the following data from the pelvis phantom:

Figure 4.1: pelvis phantom image

    Method   CT data used                        US data used
    ICP      Segmented surface points from CT    Segmented surface points from US
    MI       CT image slices                     Multiple 2D US images
    PTI      Segmented surface points from CT    Multiple 2D US images

CT Data Acquisition

A GE LightSpeed CT machine was used to collect 155 grey-scale DICOM slices of the phantom pelvis (Figure 4.2). Each slice was 512 x 512 pixels with an actual size of 310mm x 310mm, yielding a resolution of about 0.61mm per pixel. The slices were 1.25mm apart, for a total length of 193.75mm. A three-dimensional mesh was created from the CT images using an in-house program called Mesher, which uses the Marching Cubes algorithm [15].

Figure 4.2: CT image slice

US Data Acquisition

To collect the US images of the pelvis phantom, the phantom was placed in a plastic basin and immersed in water. To keep the phantom submerged it was clamped to a heavy ring stand. In total, 2000 images were collected, concentrating on areas of the pelvis accessible to a US probe: the iliac crest and the ilium.

To place the US images in a three-dimensional position, each image was associated with an affine transformation relative to a common origin. The transformations were found using a Polaris optical tracking system (Figure 4.3). It consists of two elements: an infrared emitter/receiver (camera) and a Dynamic Reference Body (DRB) mounted on the US probe. A DRB has three reflective points that the camera can use to track position to an accuracy of 0.3mm.

An additional set of 38 pelvis phantom surface points was collected with a stylus that had an attached DRB. These points were used to determine the Gold Standard (discussed in the next section). The calibration of the stylus and US probe used the technique of Chen [7] (page

73).

Figure 4.3: Polaris Tracking System

4.2 Experiment Design

The common element among MI, ICP and PTI is that they all take US and CT data at some given initial position and return a new, registered position. The experiment was designed so that the initial position for each method was consistent and the final registered position returned by each method was compared using the same technique. Figure 4.4 outlines the experiment flow. First, the CT and US data are placed at the gold standard (GS), the perfectly aligned position. Then, the US is perturbed from the GS by some random amount of rotation and translation. The perturbed US and CT data are given to each registration technique as an initial starting position, where registration is performed. Finally, the Target Registration Error (TRE) is calculated for the result of each method. The details of each step are explained in the following sections. Figure 4.5 shows an example of a good registration result and a bad registration

result. In the good registration result, the CT points almost perfectly align with the bone surface in the US image; in the bad registration result, the CT points overlap with only a small percentage of non-zero pixel values.

Figure 4.4: Experiment Registration Flow

Gold Standard

The Gold Standard (GS) is the transformation that accurately maps US images from US space to their corresponding positions in CT space. In this experiment the gold standard serves two purposes: to gauge the quality of a registration and to create a registration test. The GS transformation was produced by using the ICP method to find the transformation between the stylus points collected in US space and the surface mesh produced from the CT images. When transformed into CT space, the two data sets are registered to within 1.55mm of error. This is a combination of 0.3mm of calibration

error from the Polaris [7] and 1.25mm of residual distance between the stylus points and the CT after ICP.

Figure 4.5: Two registration results. Both images are ultrasound pixels with CT points superimposed on top. Left - good registration result, Right - bad registration result

Target Registration Error

The accuracy of a registration method can be gauged by measuring the error of the following transformations:

T_Before = the transform applied to the US images to move them from the GS location to another location, meant to represent the alignment error of a surgeon's initial guess before registration (the perturbed position).

T_After = the transform applied to the US positioned at the GS after registration. It is produced by the registration algorithm to align the CT and US. When the data are perfectly registered, T_After is the identity.

The standard technique to measure transformation error in the field of medical image registration is to calculate the Target Registration Error (TRE): the average distance between a point set at one position and the same point set at another position,

    TRE(T) = (1/|X|) Σ_{x_i ∈ X} ‖x_i − T x_i‖    (4.1)

where X is a set of points at the GS and T is the transform being measured. The point set X used for this experiment consisted of six fiducial points, not used in the registration process, located on the pelvis surface at the GS. Each point was chosen manually to satisfy the following constraints: the points were not to be coplanar, and they had to be well distributed (a minimum of 1cm from any other point).

The TRE of T_Before was found by taking the average distance between the fiducial points at the GS and the fiducial points transformed to the initial starting position. Similarly, the TRE of T_After was calculated by taking the average distance between corresponding fiducial points positioned at the GS and the same points transformed from the initial starting position to the final registration position (see Figure 4.6).

The convergence space of a registration method is the range of TRE(T_Before) values for which the method can reliably register. In a clinical environment, a reliable registration is considered to be one with a TRE(T_After) under 2.0mm. An accurate registration method has a large convergence space with a low standard deviation of TRE(T_After) values. It should be noted that the gold standard error discussed above is about 1.5mm, roughly 0.5mm below the clinically accepted level.

Initial Starting Position

The initial starting position of each test is the location of both the US and the CT prior to registration. For all of our tests, the CT was placed at the GS and the US

was transformed from the GS using T_Before, the transform discussed in the previous section. The transform is a combination of a rotation and a translation offset. The translation offset was produced by assigning random x, y and z values to form a direction vector; this vector was normalized and then multiplied by a constant representing the magnitude of the translation offset, a random value chosen between 0mm and 10mm. The rotation offset was produced similarly: random x, y and z values formed a vector that was normalized and then multiplied by a constant representing the rotation angle, a random value between 0 and 20 degrees. The initial starting position was formed by applying the rotation offset and then the translation offset to the US images at the GS. In total, 400 initial starting positions were produced, uniformly distributed over [0,10] mm of translation error and [0,20] degrees of rotational error in random directions.

Figure 4.6: Example of TRE calculation for points transformed from the GS to the perturbed position and the registered position
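The perturbation scheme and Equation (4.1) can be sketched together. This is an illustration, not the experiment's code: `random_perturbation` and `tre` are hypothetical helper names, the rotation matrix is built from the random axis and angle via Rodrigues' formula, and points are stored as rows (the row-vector convention of Chapter 3).

```python
import numpy as np

def random_perturbation(rng):
    """A random T_Before: axis-angle rotation up to 20 degrees, plus a
    random direction scaled to a translation magnitude up to 10 mm."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.radians(rng.uniform(0.0, 20.0))
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    t_dir = rng.normal(size=3)
    t = (t_dir / np.linalg.norm(t_dir)) * rng.uniform(0.0, 10.0)
    return R, t

def tre(fiducials, R, t):
    """Equation (4.1): mean distance between the GS fiducials and the same
    fiducials moved by the transform being measured."""
    moved = fiducials @ R + t
    return float(np.linalg.norm(fiducials - moved, axis=1).mean())

rng = np.random.default_rng(7)
fids = rng.uniform(-50.0, 50.0, size=(6, 3))      # six fiducials (mm)
R, t = random_perturbation(rng)
print(np.allclose(R @ R.T, np.eye(3)))            # True: R is a rotation
# A pure 2 mm translation displaces every fiducial by exactly 2 mm.
print(tre(fids, np.eye(3), np.array([0.0, 0.0, 2.0])))   # 2.0
```

Evaluating `tre(fids, R, t)` for the random perturbation gives the TRE(T_Before) of one trial; repeating this 400 times reproduces the shape of the experiment described above.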


More information

82 REGISTRATION OF RETINOGRAPHIES

82 REGISTRATION OF RETINOGRAPHIES 82 REGISTRATION OF RETINOGRAPHIES 3.3 Our method Our method resembles the human approach to image matching in the sense that we also employ as guidelines features common to both images. It seems natural

More information

Classification of Abdominal Tissues by k-means Clustering for 3D Acoustic and Shear-Wave Modeling

Classification of Abdominal Tissues by k-means Clustering for 3D Acoustic and Shear-Wave Modeling 1 Classification of Abdominal Tissues by k-means Clustering for 3D Acoustic and Shear-Wave Modeling Kevin T. Looby klooby@stanford.edu I. ABSTRACT Clutter is an effect that degrades the quality of medical

More information

Generation of Triangle Meshes from Time-of-Flight Data for Surface Registration

Generation of Triangle Meshes from Time-of-Flight Data for Surface Registration Generation of Triangle Meshes from Time-of-Flight Data for Surface Registration Thomas Kilgus, Thiago R. dos Santos, Alexander Seitel, Kwong Yung, Alfred M. Franz, Anja Groch, Ivo Wolf, Hans-Peter Meinzer,

More information

3D Modeling of Objects Using Laser Scanning

3D Modeling of Objects Using Laser Scanning 1 3D Modeling of Objects Using Laser Scanning D. Jaya Deepu, LPU University, Punjab, India Email: Jaideepudadi@gmail.com Abstract: In the last few decades, constructing accurate three-dimensional models

More information

Mutual Information Based Methods to Localize Image Registration

Mutual Information Based Methods to Localize Image Registration Mutual Information Based Methods to Localize Image Registration by Kathleen P. Wilkie A thesis presented to the University of Waterloo in fulfilment of the thesis requirement for the degree of Master of

More information

Advanced Image Reconstruction Methods for Photoacoustic Tomography

Advanced Image Reconstruction Methods for Photoacoustic Tomography Advanced Image Reconstruction Methods for Photoacoustic Tomography Mark A. Anastasio, Kun Wang, and Robert Schoonover Department of Biomedical Engineering Washington University in St. Louis 1 Outline Photoacoustic/thermoacoustic

More information

Image Thickness Correction for Navigation with 3D Intra-cardiac Ultrasound Catheter

Image Thickness Correction for Navigation with 3D Intra-cardiac Ultrasound Catheter Image Thickness Correction for Navigation with 3D Intra-cardiac Ultrasound Catheter Hua Zhong 1, Takeo Kanade 1,andDavidSchwartzman 2 1 Computer Science Department, Carnegie Mellon University, USA 2 University

More information

GPU Ultrasound Simulation and Volume Reconstruction

GPU Ultrasound Simulation and Volume Reconstruction GPU Ultrasound Simulation and Volume Reconstruction Athanasios Karamalis 1,2 Supervisor: Nassir Navab1 Advisor: Oliver Kutter1, Wolfgang Wein2 1Computer Aided Medical Procedures (CAMP), Technische Universität

More information

Light and the Properties of Reflection & Refraction

Light and the Properties of Reflection & Refraction Light and the Properties of Reflection & Refraction OBJECTIVE To study the imaging properties of a plane mirror. To prove the law of reflection from the previous imaging study. To study the refraction

More information

Advanced Visual Medicine: Techniques for Visual Exploration & Analysis

Advanced Visual Medicine: Techniques for Visual Exploration & Analysis Advanced Visual Medicine: Techniques for Visual Exploration & Analysis Interactive Visualization of Multimodal Volume Data for Neurosurgical Planning Felix Ritter, MeVis Research Bremen Multimodal Neurosurgical

More information

Image Acquisition Systems

Image Acquisition Systems Image Acquisition Systems Goals and Terminology Conventional Radiography Axial Tomography Computer Axial Tomography (CAT) Magnetic Resonance Imaging (MRI) PET, SPECT Ultrasound Microscopy Imaging ITCS

More information

3D Ultrasound System Using a Magneto-optic Hybrid Tracker for Augmented Reality Visualization in Laparoscopic Liver Surgery

3D Ultrasound System Using a Magneto-optic Hybrid Tracker for Augmented Reality Visualization in Laparoscopic Liver Surgery 3D Ultrasound System Using a Magneto-optic Hybrid Tracker for Augmented Reality Visualization in Laparoscopic Liver Surgery Masahiko Nakamoto 1, Yoshinobu Sato 1, Masaki Miyamoto 1, Yoshikazu Nakamjima

More information

A SYSTEM FOR COMPUTER-ASSISTED SURGERY WITH INTRAOPERATIVE CT IMAGING

A SYSTEM FOR COMPUTER-ASSISTED SURGERY WITH INTRAOPERATIVE CT IMAGING A SYSTEM FOR COMPUTER-ASSISTED SURGERY WITH INTRAOPERATIVE CT IMAGING by ANTON OENTORO A thesis submitted to the School of Computing in conformity with the requirements for the degree of Master of Science

More information

A Study of Medical Image Analysis System

A Study of Medical Image Analysis System Indian Journal of Science and Technology, Vol 8(25), DOI: 10.17485/ijst/2015/v8i25/80492, October 2015 ISSN (Print) : 0974-6846 ISSN (Online) : 0974-5645 A Study of Medical Image Analysis System Kim Tae-Eun

More information

Bone registration with 3D CT and ultrasound data sets

Bone registration with 3D CT and ultrasound data sets International Congress Series 1256 (2003) 426 432 Bone registration with 3D CT and ultrasound data sets B. Brendel a, *,1, S. Winter b,1, A. Rick c,1, M. Stockheim d,1, H. Ermert a,1 a Institute of High

More information

Shadow casting. What is the problem? Cone Beam Computed Tomography THE OBJECTIVES OF DIAGNOSTIC IMAGING IDEAL DIAGNOSTIC IMAGING STUDY LIMITATIONS

Shadow casting. What is the problem? Cone Beam Computed Tomography THE OBJECTIVES OF DIAGNOSTIC IMAGING IDEAL DIAGNOSTIC IMAGING STUDY LIMITATIONS Cone Beam Computed Tomography THE OBJECTIVES OF DIAGNOSTIC IMAGING Reveal pathology Reveal the anatomic truth Steven R. Singer, DDS srs2@columbia.edu IDEAL DIAGNOSTIC IMAGING STUDY Provides desired diagnostic

More information

3D Scanning. Qixing Huang Feb. 9 th Slide Credit: Yasutaka Furukawa

3D Scanning. Qixing Huang Feb. 9 th Slide Credit: Yasutaka Furukawa 3D Scanning Qixing Huang Feb. 9 th 2017 Slide Credit: Yasutaka Furukawa Geometry Reconstruction Pipeline This Lecture Depth Sensing ICP for Pair-wise Alignment Next Lecture Global Alignment Pairwise Multiple

More information

Digital Image Processing

Digital Image Processing Digital Image Processing SPECIAL TOPICS CT IMAGES Hamid R. Rabiee Fall 2015 What is an image? 2 Are images only about visual concepts? We ve already seen that there are other kinds of image. In this lecture

More information

Medicale Image Analysis

Medicale Image Analysis Medicale Image Analysis Registration Validation Prof. Dr. Philippe Cattin MIAC, University of Basel Prof. Dr. Philippe Cattin: Registration Validation Contents 1 Validation 1.1 Validation of Registration

More information

Digital Volume Correlation for Materials Characterization

Digital Volume Correlation for Materials Characterization 19 th World Conference on Non-Destructive Testing 2016 Digital Volume Correlation for Materials Characterization Enrico QUINTANA, Phillip REU, Edward JIMENEZ, Kyle THOMPSON, Sharlotte KRAMER Sandia National

More information

Imaging protocols for navigated procedures

Imaging protocols for navigated procedures 9732379 G02 Rev. 1 2015-11 Imaging protocols for navigated procedures How to use this document This document contains imaging protocols for navigated cranial, DBS and stereotactic, ENT, and spine procedures

More information

Deformable Registration Using Scale Space Keypoints

Deformable Registration Using Scale Space Keypoints Deformable Registration Using Scale Space Keypoints Mehdi Moradi a, Purang Abolmaesoumi a,b and Parvin Mousavi a a School of Computing, Queen s University, Kingston, Ontario, Canada K7L 3N6; b Department

More information

INTRODUCTION TO MEDICAL IMAGING- 3D LOCALIZATION LAB MANUAL 1. Modifications for P551 Fall 2013 Medical Physics Laboratory

INTRODUCTION TO MEDICAL IMAGING- 3D LOCALIZATION LAB MANUAL 1. Modifications for P551 Fall 2013 Medical Physics Laboratory INTRODUCTION TO MEDICAL IMAGING- 3D LOCALIZATION LAB MANUAL 1 Modifications for P551 Fall 2013 Medical Physics Laboratory Introduction Following the introductory lab 0, this lab exercise the student through

More information

Methodological progress in image registration for ventilation estimation, segmentation propagation and multi-modal fusion

Methodological progress in image registration for ventilation estimation, segmentation propagation and multi-modal fusion Methodological progress in image registration for ventilation estimation, segmentation propagation and multi-modal fusion Mattias P. Heinrich Julia A. Schnabel, Mark Jenkinson, Sir Michael Brady 2 Clinical

More information

Corso di laurea in Fisica A.A Fisica Medica 4 TC

Corso di laurea in Fisica A.A Fisica Medica 4 TC Corso di laurea in Fisica A.A. 2007-2008 Fisica Medica 4 TC Computed Tomography Principles 1. Projection measurement 2. Scanner systems 3. Scanning modes Basic Tomographic Principle The internal structure

More information

Nonrigid Registration using Free-Form Deformations

Nonrigid Registration using Free-Form Deformations Nonrigid Registration using Free-Form Deformations Hongchang Peng April 20th Paper Presented: Rueckert et al., TMI 1999: Nonrigid registration using freeform deformations: Application to breast MR images

More information

A Radiometry Tolerant Method for Direct 3D/2D Registration of Computed Tomography Data to X-ray Images

A Radiometry Tolerant Method for Direct 3D/2D Registration of Computed Tomography Data to X-ray Images A Radiometry Tolerant Method for Direct 3D/2D Registration of Computed Tomography Data to X-ray Images Transfer Function Independent Registration Boris Peter Selby 1, Georgios Sakas 2, Stefan Walter 1,

More information

Lecture 13 Theory of Registration. ch. 10 of Insight into Images edited by Terry Yoo, et al. Spring (CMU RI) : BioE 2630 (Pitt)

Lecture 13 Theory of Registration. ch. 10 of Insight into Images edited by Terry Yoo, et al. Spring (CMU RI) : BioE 2630 (Pitt) Lecture 13 Theory of Registration ch. 10 of Insight into Images edited by Terry Yoo, et al. Spring 2018 16-725 (CMU RI) : BioE 2630 (Pitt) Dr. John Galeotti The content of these slides by John Galeotti,

More information

Automatic Vascular Tree Formation Using the Mahalanobis Distance

Automatic Vascular Tree Formation Using the Mahalanobis Distance Automatic Vascular Tree Formation Using the Mahalanobis Distance Julien Jomier, Vincent LeDigarcher, and Stephen R. Aylward Computer-Aided Diagnosis and Display Lab, Department of Radiology The University

More information

Classification of Subject Motion for Improved Reconstruction of Dynamic Magnetic Resonance Imaging

Classification of Subject Motion for Improved Reconstruction of Dynamic Magnetic Resonance Imaging 1 CS 9 Final Project Classification of Subject Motion for Improved Reconstruction of Dynamic Magnetic Resonance Imaging Feiyu Chen Department of Electrical Engineering ABSTRACT Subject motion is a significant

More information

Enhanced material contrast by dual-energy microct imaging

Enhanced material contrast by dual-energy microct imaging Enhanced material contrast by dual-energy microct imaging Method note Page 1 of 12 2 Method note: Dual-energy microct analysis 1. Introduction 1.1. The basis for dual energy imaging Micro-computed tomography

More information

FOREWORD TO THE SPECIAL ISSUE ON MOTION DETECTION AND COMPENSATION

FOREWORD TO THE SPECIAL ISSUE ON MOTION DETECTION AND COMPENSATION Philips J. Res. 51 (1998) 197-201 FOREWORD TO THE SPECIAL ISSUE ON MOTION DETECTION AND COMPENSATION This special issue of Philips Journalof Research includes a number of papers presented at a Philips

More information

Intensity Augmented ICP for Registration of Laser Scanner Point Clouds

Intensity Augmented ICP for Registration of Laser Scanner Point Clouds Intensity Augmented ICP for Registration of Laser Scanner Point Clouds Bharat Lohani* and Sandeep Sashidharan *Department of Civil Engineering, IIT Kanpur Email: blohani@iitk.ac.in. Abstract While using

More information

Recovery of 3D Pose of Bones in Single 2D X-ray Images

Recovery of 3D Pose of Bones in Single 2D X-ray Images Recovery of 3D Pose of Bones in Single 2D X-ray Images Piyush Kanti Bhunre Wee Kheng Leow Dept. of Computer Science National University of Singapore 3 Science Drive 2, Singapore 117543 {piyushka, leowwk}@comp.nus.edu.sg

More information

Iterative Estimation of 3D Transformations for Object Alignment

Iterative Estimation of 3D Transformations for Object Alignment Iterative Estimation of 3D Transformations for Object Alignment Tao Wang and Anup Basu Department of Computing Science, Univ. of Alberta, Edmonton, AB T6G 2E8, Canada Abstract. An Iterative Estimation

More information

Computer Graphics. - Volume Rendering - Philipp Slusallek

Computer Graphics. - Volume Rendering - Philipp Slusallek Computer Graphics - Volume Rendering - Philipp Slusallek Overview Motivation Volume Representation Indirect Volume Rendering Volume Classification Direct Volume Rendering Applications: Bioinformatics Image

More information

AUTOMATIC DETECTION OF ENDOSCOPE IN INTRAOPERATIVE CT IMAGE: APPLICATION TO AUGMENTED REALITY GUIDANCE IN LAPAROSCOPIC SURGERY

AUTOMATIC DETECTION OF ENDOSCOPE IN INTRAOPERATIVE CT IMAGE: APPLICATION TO AUGMENTED REALITY GUIDANCE IN LAPAROSCOPIC SURGERY AUTOMATIC DETECTION OF ENDOSCOPE IN INTRAOPERATIVE CT IMAGE: APPLICATION TO AUGMENTED REALITY GUIDANCE IN LAPAROSCOPIC SURGERY Summary of thesis by S. Bernhardt Thesis director: Christophe Doignon Thesis

More information

Computational Medical Imaging Analysis Chapter 4: Image Visualization

Computational Medical Imaging Analysis Chapter 4: Image Visualization Computational Medical Imaging Analysis Chapter 4: Image Visualization Jun Zhang Laboratory for Computational Medical Imaging & Data Analysis Department of Computer Science University of Kentucky Lexington,

More information

Constructing System Matrices for SPECT Simulations and Reconstructions

Constructing System Matrices for SPECT Simulations and Reconstructions Constructing System Matrices for SPECT Simulations and Reconstructions Nirantha Balagopal April 28th, 2017 M.S. Report The University of Arizona College of Optical Sciences 1 Acknowledgement I would like

More information

MR IMAGE SEGMENTATION

MR IMAGE SEGMENTATION MR IMAGE SEGMENTATION Prepared by : Monil Shah What is Segmentation? Partitioning a region or regions of interest in images such that each region corresponds to one or more anatomic structures Classification

More information

Computed tomography - outline

Computed tomography - outline Computed tomography - outline Computed Tomography Systems Jørgen Arendt Jensen and Mikael Jensen (DTU Nutech) October 6, 216 Center for Fast Ultrasound Imaging, Build 349 Department of Electrical Engineering

More information

White Pixel Artifact. Caused by a noise spike during acquisition Spike in K-space <--> sinusoid in image space

White Pixel Artifact. Caused by a noise spike during acquisition Spike in K-space <--> sinusoid in image space White Pixel Artifact Caused by a noise spike during acquisition Spike in K-space sinusoid in image space Susceptibility Artifacts Off-resonance artifacts caused by adjacent regions with different

More information

K-Means Clustering Using Localized Histogram Analysis

K-Means Clustering Using Localized Histogram Analysis K-Means Clustering Using Localized Histogram Analysis Michael Bryson University of South Carolina, Department of Computer Science Columbia, SC brysonm@cse.sc.edu Abstract. The first step required for many

More information

William Yang Group 14 Mentor: Dr. Rogerio Richa Visual Tracking of Surgical Tools in Retinal Surgery using Particle Filtering

William Yang Group 14 Mentor: Dr. Rogerio Richa Visual Tracking of Surgical Tools in Retinal Surgery using Particle Filtering Mutual Information Computation and Maximization Using GPU Yuping Lin and Gérard Medioni Computer Vision and Pattern Recognition Workshops (CVPR) Anchorage, AK, pp. 1-6, June 2008 Project Summary and Paper

More information

Visualisation : Lecture 1. So what is visualisation? Visualisation

Visualisation : Lecture 1. So what is visualisation? Visualisation So what is visualisation? UG4 / M.Sc. Course 2006 toby.breckon@ed.ac.uk Computer Vision Lab. Institute for Perception, Action & Behaviour Introducing 1 Application of interactive 3D computer graphics to

More information

Computed tomography (Item No.: P )

Computed tomography (Item No.: P ) Computed tomography (Item No.: P2550100) Curricular Relevance Area of Expertise: Biology Education Level: University Topic: Modern Imaging Methods Subtopic: X-ray Imaging Experiment: Computed tomography

More information

Deviceless respiratory motion correction in PET imaging exploring the potential of novel data driven strategies

Deviceless respiratory motion correction in PET imaging exploring the potential of novel data driven strategies g Deviceless respiratory motion correction in PET imaging exploring the potential of novel data driven strategies Presented by Adam Kesner, Ph.D., DABR Assistant Professor, Division of Radiological Sciences,

More information

C a t p h a n / T h e P h a n t o m L a b o r a t o r y

C a t p h a n / T h e P h a n t o m L a b o r a t o r y C a t p h a n 5 0 0 / 6 0 0 T h e P h a n t o m L a b o r a t o r y C a t p h a n 5 0 0 / 6 0 0 Internationally recognized for measuring the maximum obtainable performance of axial, spiral and multi-slice

More information

Basic principles of MR image analysis. Basic principles of MR image analysis. Basic principles of MR image analysis

Basic principles of MR image analysis. Basic principles of MR image analysis. Basic principles of MR image analysis Basic principles of MR image analysis Basic principles of MR image analysis Julien Milles Leiden University Medical Center Terminology of fmri Brain extraction Registration Linear registration Non-linear

More information

Annales UMCS Informatica AI 1 (2003) UMCS. Registration of CT and MRI brain images. Karol Kuczyński, Paweł Mikołajczak

Annales UMCS Informatica AI 1 (2003) UMCS. Registration of CT and MRI brain images. Karol Kuczyński, Paweł Mikołajczak Annales Informatica AI 1 (2003) 149-156 Registration of CT and MRI brain images Karol Kuczyński, Paweł Mikołajczak Annales Informatica Lublin-Polonia Sectio AI http://www.annales.umcs.lublin.pl/ Laboratory

More information

REAL-TIME ADAPTIVITY IN HEAD-AND-NECK AND LUNG CANCER RADIOTHERAPY IN A GPU ENVIRONMENT

REAL-TIME ADAPTIVITY IN HEAD-AND-NECK AND LUNG CANCER RADIOTHERAPY IN A GPU ENVIRONMENT REAL-TIME ADAPTIVITY IN HEAD-AND-NECK AND LUNG CANCER RADIOTHERAPY IN A GPU ENVIRONMENT Anand P Santhanam Assistant Professor, Department of Radiation Oncology OUTLINE Adaptive radiotherapy for head and

More information

Additional file 1: Online Supplementary Material 1

Additional file 1: Online Supplementary Material 1 Additional file 1: Online Supplementary Material 1 Calyn R Moulton and Michael J House School of Physics, University of Western Australia, Crawley, Western Australia. Victoria Lye, Colin I Tang, Michele

More information

Certificate in Clinician Performed Ultrasound (CCPU)

Certificate in Clinician Performed Ultrasound (CCPU) Certificate in Clinician Performed Ultrasound (CCPU) Syllabus Physics Tutorial Physics Tutorial Purpose: Training: Assessments: This unit is designed to cover the theoretical and practical curriculum for

More information

Image Segmentation and Registration

Image Segmentation and Registration Image Segmentation and Registration Dr. Christine Tanner (tanner@vision.ee.ethz.ch) Computer Vision Laboratory, ETH Zürich Dr. Verena Kaynig, Machine Learning Laboratory, ETH Zürich Outline Segmentation

More information

A Multiple-Layer Flexible Mesh Template Matching Method for Nonrigid Registration between a Pelvis Model and CT Images

A Multiple-Layer Flexible Mesh Template Matching Method for Nonrigid Registration between a Pelvis Model and CT Images A Multiple-Layer Flexible Mesh Template Matching Method for Nonrigid Registration between a Pelvis Model and CT Images Jianhua Yao 1, Russell Taylor 2 1. Diagnostic Radiology Department, Clinical Center,

More information

Fundamentals of CT imaging

Fundamentals of CT imaging SECTION 1 Fundamentals of CT imaging I History In the early 1970s Sir Godfrey Hounsfield s research produced the first clinically useful CT scans. Original scanners took approximately 6 minutes to perform

More information

2D Rigid Registration of MR Scans using the 1d Binary Projections

2D Rigid Registration of MR Scans using the 1d Binary Projections 2D Rigid Registration of MR Scans using the 1d Binary Projections Panos D. Kotsas Abstract This paper presents the application of a signal intensity independent registration criterion for 2D rigid body

More information

Modeling and preoperative planning for kidney surgery

Modeling and preoperative planning for kidney surgery Modeling and preoperative planning for kidney surgery Refael Vivanti Computer Aided Surgery and Medical Image Processing Lab Hebrew University of Jerusalem, Israel Advisor: Prof. Leo Joskowicz Clinical

More information

Biomedical Image Processing

Biomedical Image Processing Biomedical Image Processing Jason Thong Gabriel Grant 1 2 Motivation from the Medical Perspective MRI, CT and other biomedical imaging devices were designed to assist doctors in their diagnosis and treatment

More information

Response to Reviewers

Response to Reviewers Response to Reviewers We thank the reviewers for their feedback and have modified the manuscript and expanded results accordingly. There have been several major revisions to the manuscript. First, we have

More information

Medical Images Analysis and Processing

Medical Images Analysis and Processing Medical Images Analysis and Processing - 25642 Emad Course Introduction Course Information: Type: Graduated Credits: 3 Prerequisites: Digital Image Processing Course Introduction Reference(s): Insight

More information

Index. aliasing artifacts and noise in CT images, 200 measurement of projection data, nondiffracting

Index. aliasing artifacts and noise in CT images, 200 measurement of projection data, nondiffracting Index Algebraic equations solution by Kaczmarz method, 278 Algebraic reconstruction techniques, 283-84 sequential, 289, 293 simultaneous, 285-92 Algebraic techniques reconstruction algorithms, 275-96 Algorithms

More information

Biomechanically Constrained Ultrasound to Computed Tomography Registration of the Lumbar Spine

Biomechanically Constrained Ultrasound to Computed Tomography Registration of the Lumbar Spine Biomechanically Constrained Ultrasound to Computed Tomography Registration of the Lumbar Spine by Sean Gill A thesis submitted to the School of Computing in conformity with the requirements for the degree

More information

Computed tomography of simple objects. Related topics. Principle. Equipment TEP Beam hardening, artefacts, and algorithms

Computed tomography of simple objects. Related topics. Principle. Equipment TEP Beam hardening, artefacts, and algorithms Related topics Beam hardening, artefacts, and algorithms Principle The CT principle is demonstrated with the aid of simple objects. In the case of very simple targets, only a few images need to be taken

More information

Machine Learning for Medical Image Analysis. A. Criminisi

Machine Learning for Medical Image Analysis. A. Criminisi Machine Learning for Medical Image Analysis A. Criminisi Overview Introduction to machine learning Decision forests Applications in medical image analysis Anatomy localization in CT Scans Spine Detection

More information

A Generation Methodology for Numerical Phantoms with Statistically Relevant Variability of Geometric and Physical Properties

A Generation Methodology for Numerical Phantoms with Statistically Relevant Variability of Geometric and Physical Properties A Generation Methodology for Numerical Phantoms with Statistically Relevant Variability of Geometric and Physical Properties Steven Dolly 1, Eric Ehler 1, Yang Lou 2, Mark Anastasio 2, Hua Li 2 (1) University

More information

Volume visualization. Volume visualization. Volume visualization methods. Sources of volume visualization. Sources of volume visualization

Volume visualization. Volume visualization. Volume visualization methods. Sources of volume visualization. Sources of volume visualization Volume visualization Volume visualization Volumes are special cases of scalar data: regular 3D grids of scalars, typically interpreted as density values. Each data value is assumed to describe a cubic

More information

Multi-Modal Volume Registration Using Joint Intensity Distributions

Multi-Modal Volume Registration Using Joint Intensity Distributions Multi-Modal Volume Registration Using Joint Intensity Distributions Michael E. Leventon and W. Eric L. Grimson Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA leventon@ai.mit.edu

More information

ADAPTIVE GRAPH CUTS WITH TISSUE PRIORS FOR BRAIN MRI SEGMENTATION

ADAPTIVE GRAPH CUTS WITH TISSUE PRIORS FOR BRAIN MRI SEGMENTATION ADAPTIVE GRAPH CUTS WITH TISSUE PRIORS FOR BRAIN MRI SEGMENTATION Abstract: MIP Project Report Spring 2013 Gaurav Mittal 201232644 This is a detailed report about the course project, which was to implement

More information

Scanning Real World Objects without Worries 3D Reconstruction

Scanning Real World Objects without Worries 3D Reconstruction Scanning Real World Objects without Worries 3D Reconstruction 1. Overview Feng Li 308262 Kuan Tian 308263 This document is written for the 3D reconstruction part in the course Scanning real world objects

More information

COMPARATIVE STUDIES OF DIFFERENT SYSTEM MODELS FOR ITERATIVE CT IMAGE RECONSTRUCTION

COMPARATIVE STUDIES OF DIFFERENT SYSTEM MODELS FOR ITERATIVE CT IMAGE RECONSTRUCTION COMPARATIVE STUDIES OF DIFFERENT SYSTEM MODELS FOR ITERATIVE CT IMAGE RECONSTRUCTION BY CHUANG MIAO A Thesis Submitted to the Graduate Faculty of WAKE FOREST UNIVERSITY GRADUATE SCHOOL OF ARTS AND SCIENCES

More information

Slide 1. Technical Aspects of Quality Control in Magnetic Resonance Imaging. Slide 2. Annual Compliance Testing. of MRI Systems.

Slide 1. Technical Aspects of Quality Control in Magnetic Resonance Imaging. Slide 2. Annual Compliance Testing. of MRI Systems. Slide 1 Technical Aspects of Quality Control in Magnetic Resonance Imaging Slide 2 Compliance Testing of MRI Systems, Ph.D. Department of Radiology Henry Ford Hospital, Detroit, MI Slide 3 Compliance Testing

More information

Refraction Corrected Transmission Ultrasound Computed Tomography for Application in Breast Imaging

Refraction Corrected Transmission Ultrasound Computed Tomography for Application in Breast Imaging Refraction Corrected Transmission Ultrasound Computed Tomography for Application in Breast Imaging Joint Research With Trond Varslot Marcel Jackowski Shengying Li and Klaus Mueller Ultrasound Detection

More information

An Intuitive Explanation of Fourier Theory

An Intuitive Explanation of Fourier Theory An Intuitive Explanation of Fourier Theory Steven Lehar slehar@cns.bu.edu Fourier theory is pretty complicated mathematically. But there are some beautifully simple holistic concepts behind Fourier theory

More information

Physiological Motion Compensation in Minimally Invasive Robotic Surgery Part I

Physiological Motion Compensation in Minimally Invasive Robotic Surgery Part I Physiological Motion Compensation in Minimally Invasive Robotic Surgery Part I Tobias Ortmaier Laboratoire de Robotique de Paris 18, route du Panorama - BP 61 92265 Fontenay-aux-Roses Cedex France Tobias.Ortmaier@alumni.tum.de

More information