Medical Image Analysis 15 (2011)

DRAMMS: Deformable registration via attribute matching and mutual-saliency weighting

Yangming Ou a,*, Aristeidis Sotiras b,c, Nikos Paragios b,c, Christos Davatzikos a

a Section of Biomedical Image Analysis (SBIA), University of Pennsylvania, 3600 Market St., Ste 380, Philadelphia, PA 19104, USA
b MAS Laboratory, Ecole Centrale de Paris, Grande Voie des Vignes, Chatenay-Malabry, France
c Equipe GALEN, INRIA Saclay Ile-de-France, Orsay 91893, France
* Corresponding author. E-mail address: Yangming.Ou@uphs.upenn.edu (Y. Ou).

Article history: Available online 17 July 2010.
Keywords: image registration; attribute matching; Gabor filter bank; mutual-saliency; outlier data.

Abstract

A general-purpose deformable registration algorithm referred to as DRAMMS is presented in this paper. DRAMMS bridges the gap between traditional voxel-wise methods and landmark/feature-based methods with two main contributions. First, DRAMMS renders each voxel relatively distinctively identifiable by a rich set of attributes, thereby largely reducing matching ambiguities. In particular, a set of multi-scale and multi-orientation Gabor attributes is extracted and the optimal components are selected, so that they form a highly distinctive morphological signature reflecting the anatomical and geometric context around each voxel. Moreover, the way in which the optimal Gabor attributes are constructed is independent of the underlying image modalities or contents, which renders DRAMMS generally applicable to diverse registration tasks. A second contribution of DRAMMS is that it modulates the registration by assigning higher weights to those voxels having a higher ability to establish unique (hence reliable) correspondences across images, thereby reducing the negative impact of regions that are less capable of finding correspondences (such as outlier regions). A continuously-valued weighting function named mutual-saliency is developed to reflect the matching uniqueness between a pair of voxels implied by the tentative transformation. As a result, voxels do not contribute equally as in most voxel-wise methods, nor in isolation as in landmark/feature-based methods. Instead, they contribute according to the continuously-valued mutual-saliency map, which dynamically evolves during the registration process. Experiments on simulated images, inter-subject images, and single-/multi-modality images of the brain, heart and prostate demonstrate the general applicability and accuracy of DRAMMS. © 2010 Elsevier B.V. All rights reserved.

1. Introduction

Image registration is the process of finding the optimal transformation that aligns different imaging data into spatial correspondence, so that after registration the same anatomic structures occupy the same spatial locations in different images (Zitova and Flusser, 2003; Crum et al., 2004). Image registration is the building block for a variety of medical image analysis tasks, such as motion correction (e.g.,
Zuo et al., 1996; Bai and Brady, 2009), multi-modality information fusion (e.g., Meyer et al., 1997; Verma et al., 2008), atlas-based image segmentation (e.g., Prastawa et al., 2005; Lawes et al., 2008; Gee et al., 1993), population studies (e.g., Geng et al., 2009; Sotiras et al., 2009; Ou et al., 2009b; Zollei et al., 2005; Good et al., 2004), longitudinal studies (e.g., Csapo et al., 2007; Shen and Davatzikos, 2004; Xue et al., 2006), computational anatomy (e.g., Joshi et al., 2004; Ashburner, 2007; Leow et al., 2007; Thompson and Apostolova, 2007) and image-guided surgery (e.g., Muratore et al., 2003; Gering et al., 2001; Hata et al., 1998). During the past two decades, a large number of deformable (non-rigid) image registration methods have been developed, and several reviews can be found in Maintz and Viergever (1998), Lester and Arridge (1999), Hill et al. (2001), Zitova and Flusser (2003), Pluim et al. (2003), Crum et al. (2004), and Holden (2008). These existing deformable registration methods can be generally classified into two main categories: landmark/feature-based methods and voxel-wise methods. Examples of the former category include (Thompson and Toga, 1996; Davatzikos et al., 1996; Rohr et al., 2001; Shen et al., 2003; Joshi and Miller, 2000; Chui and Rangarajan, 2003; Zhan et al., 2007; Ou et al., 2009a; Gee et al., 1994), among others. Examples of the latter category include (Glocker et al., 2008; Vercauteren et al., 2007, 2009; Ou and Davatzikos, 2009; D'Agostino et al., 2003; Kybic and Unser, 2003; Christensen et al., 1994; Collins et al., 1994; Thirion, 1998; Rueckert et al., 1999; Maes et al., 1997; Wells et al., 1996; Friston et al., 1995), among others.

Specifically, landmark/feature-based methods, the former category, often explicitly detect and establish correspondences on anatomically and/or geometrically salient regions (e.g., surfaces, corners, boundary points, line intersections), and then spread the correspondences throughout the entire image space. This category of methods is fairly intuitive and computationally efficient, since only a small proportion of the imaging data, detected as landmarks/features, is used. However, these methods usually suffer from inevitable errors in the landmark/feature detection and matching processes, and from the often non-uniform distribution of landmarks within the image space, which causes a non-uniform distribution of registration accuracy. Another drawback is that the choice of landmark/feature detection tools is often heuristic and task-specific. For instance, the landmark/feature detection tool used to recognize the surface of the corpus callosum for brain image registration tasks is usually quite different from the landmark/feature detection tool used to recognize nipples for breast image registration tasks. Because of all these limitations of landmark/feature-based methods, recent literature on general-purpose registration has mostly focused on voxel-wise methods.

Voxel-wise methods, the second main category of registration methods, usually utilize all imaging data equally and find the optimal transformation by maximizing an overall similarity defined on certain voxel-wise attributes (e.g., intensities, intensity distributions). The general-purpose registration method proposed in this paper belongs essentially to the voxel-wise methods. It is developed to overcome the following two limitations of traditional voxel-wise methods.

The first major limitation of traditional voxel-wise methods is that they characterize voxels by attributes that are not necessarily optimal. Since the matching between a pair of voxels is based upon the matching between the attributes characterizing them, suboptimal attributes will unavoidably lead to matching ambiguities (Shen et al., 2003; Xue et al., 2004; Wu et al., 2006; Liao and Chung, 2009). Indeed, most voxel-wise methods characterize voxels only by image intensity; however, intensity alone does not encode sufficient anatomic information for accurate matching. For instance, hundreds of thousands of gray matter voxels in a brain image have similar intensities, yet they all correspond to different anatomical regions. To reduce matching ambiguities, other methods (e.g., Shen et al., 2003) attempted to characterize voxels by a richer set of attributes, such as surface curvatures and tissue membership. These attributes, although capable of reducing matching ambiguities, are often task- and parameter-specific, and hence not generally applicable to diverse registration tasks. In this paper, we argue and demonstrate that an ideal set of attributes should satisfy two conditions: (1) discriminative, i.e., the attributes of two voxels should be similar if and only if the voxels anatomically correspond to each other, therefore leaving minimum ambiguity in matching; (2) generally applicable, i.e., a good set of image attributes should be extractable from almost all medical images, while still satisfying the first condition, regardless of the image modalities or contents.

The second major limitation of traditional voxel-wise methods is that they often utilize all imaging data equally, which usually undermines the performance of the optimization process.
Actually, different anatomical regions/voxels usually have different abilities to establish correspondences across images (Anandan, 1989; McEachen and Duncan, 1997; Wu et al., 2007). An obvious example is when there is missing data, or loss of correspondence. For instance, in Fig. 1, the lesion regions in the diseased brain image (subject) can hardly find correspondences in the normal brain image (template). In a more general setting, even if there is no missing data or loss of correspondence, certain parts of the anatomy are still more easily identifiable than others. In Fig. 2, for instance, three voxels (depicted by red, blue and orange crosses) are examined. A similarity map is generated by calculating the attribute-based similarity between each of these three voxels in the subject image and all voxels in the template image. These similarity maps (Fig. 2c-e) reveal a small number of matching candidates for the red point, more candidates for the blue point and many more candidates for the orange point. This indicates their different abilities to establish unique correspondences, or conversely, their different degrees of matching ambiguity. Naturally, these three points should be treated with different confidences during registration. Unfortunately, most voxel-wise methods, such as the mutual-information based methods, treat all voxels equally, ignoring such differences. Other approaches attempted to address this issue by driving the registration adaptively/hierarchically using certain anatomically salient regions. However, in those approaches only a small proportion of voxels is finally utilized, and the potential contributions from the other, non-utilized voxels are ignored.

Fig. 1. Need for weighting voxels differently when there is partial loss of correspondence caused by outlier regions. The lesion (white region) in the subject image is less capable of establishing unique correspondences across images and hence should be assigned lower weights to reduce its negative impact on the registration process.

Fig. 2. Need for weighting voxels differently in a typical image registration scenario. Similarity maps (c-e) are generated between one specific voxel (depicted by red, blue and orange crosses) in the subject image (a) and all voxels in the template image (b). The red point has the highest ability to establish unique correspondences and hence should be assigned the highest weight during registration, followed by the blue point and the orange point in descending order. This figure is better viewed in the color version. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 3. DRAMMS framework. AM stands for attribute matching and MS stands for mutual-saliency weighting. They are the two major components of DRAMMS and will be described in detail in Sections 2.2 and 2.3.

Moreover, the determination of which voxels should be utilized and which should not is largely heuristic or dependent on a priori knowledge. In this paper, we propose and demonstrate that an ideal optimization process should utilize all image voxels, but with different weights, reflecting the different levels of confidence with which they can establish unique correspondences across images. This is important for increasing the matching reliability in general, and for minimizing the negative impact of loss of correspondence, or outlier data, when present (like the lesion in Fig. 1).

In this paper, we present Deformable Registration via Attribute Matching and Mutual-Saliency weighting (DRAMMS) to overcome these two aforementioned limitations. To overcome the first limitation, DRAMMS extracts a rich set of multi-scale and multi-orientation Gabor attributes at each voxel and automatically selects the optimal attributes. The optimal Gabor attributes render the method relatively robust in attribute-based image matching, and are also constructed in a way that is readily applicable to various image contents and image modalities. To overcome the second limitation, DRAMMS continuously weights all voxels during the optimization process, based on a function referred to as mutual-saliency. The mutual-saliency measure quantifies the matching uniqueness between a pair of voxels implied by the transformation. In contrast to treating voxels equally or isolating voxels that have more distinctive attributes, this mutual-saliency based continuous weighting mechanism utilizes all imaging data with appropriate and dynamically-evolving weights, and leads to a more effective optimization process.

This paper is an augmented version of a previously published conference version (Ou and Davatzikos, 2009). The extensions include more detailed method descriptions, more validations and more thorough discussions. Also, the gradient-descent-based optimization strategy of the conference version has now been replaced by a state-of-the-art discrete optimization strategy (Komodakis et al., 2008), leading to higher speed while maintaining registration accuracy. In the remainder of this paper, DRAMMS is elaborated in Section 2, with implementation details in Section 3 and experimental results in Section 4. The whole paper is discussed and concluded in Section 5.

2. Methods

2.1. Problem formulation

Given two images I_1 : Ω_1 → ℝ and I_2 : Ω_2 → ℝ defined on 3D image domains Ω_i (i = 1, 2) ⊂ ℝ³, DRAMMS seeks a transformation T that maps every voxel u ∈ Ω_1 to its correspondence T(u) ∈ Ω_2, by minimizing an overall cost function E(T),

\min_T E(T) = \int_{u \in \Omega_1} \underbrace{ms(u, T(u))}_{\text{Mutual-Saliency}} \cdot \underbrace{\frac{1}{d}\,\|A_{I_1}(u) - A_{I_2}(T(u))\|^2}_{\text{Attribute Matching}} \, du \; + \; \lambda R(T)    (1)

where A_{I_i}(u) (i = 1, 2) is the optimal attribute vector that reflects the geometric and anatomical context around voxel u, and d is its dimension. By minimizing (1/d)‖A_{I_1}(u) − A_{I_2}(T(u))‖², we seek a transformation T that minimizes the dissimilarity between a pair of voxels u ∈ Ω_1 and T(u) ∈ Ω_2.
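To make the interaction of the two terms concrete, the following minimal Python sketch evaluates the cost of Eq. (1) for one candidate transformation, assuming the attribute images have already been computed and the attributes of the second image have been resampled by T. The array names and the finite-difference regularizer are illustrative assumptions made for this sketch, not part of the actual DRAMMS implementation.

```python
import numpy as np

def dramms_energy(A1, A2_warped, ms, disp, lam=0.1):
    """Minimal sketch of the overall cost in Eq. (1).

    A1        : (d, X, Y, Z) optimal attribute vectors of image I1
    A2_warped : (d, X, Y, Z) attribute vectors of I2 resampled by T, i.e. A_I2(T(u))
    ms        : (X, Y, Z) mutual-saliency weights ms(u, T(u))
    disp      : (3, X, Y, Z) displacement field of T (T(u) = u + disp[:, u])
    lam       : balancing parameter lambda of the regularization term
    """
    d = A1.shape[0]
    # attribute-matching term, weighted voxel-wise by mutual-saliency
    dissim = np.sum((A1 - A2_warped) ** 2, axis=0) / d
    data_term = np.sum(ms * dissim)

    # simple Laplacian-like smoothness term R(T) via second-order finite differences
    reg = 0.0
    for comp in range(3):            # x, y, z components of the displacement
        for axis in range(3):        # second differences along each image axis
            reg += np.sum(np.diff(disp[comp], n=2, axis=axis) ** 2)

    return data_term + lam * reg
```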
The extraction of attributes and the selection of the optimal attribute vector A_{I_i}(·) (i = 1, 2) will be detailed in Sections 2.2.1 and 2.2.2. ms(u, T(u)) is a continuously-valued mutual-saliency weight between two voxels u ∈ Ω_1 and T(u) ∈ Ω_2: higher uniqueness of their matching in the neighborhood indicates higher mutual-saliency, and hence a higher weight in the optimization process. The definition of mutual-saliency will be elaborated in Section 2.3. R(T) is a smoothness/regularization term, usually corresponding to the Laplacian operator, or the bending energy (Bookstein, 1989), of the deformation field T, whereas λ is a balancing parameter that controls the extent of smoothness. Such a choice of regularization is widely adopted in numerous registration methods. In accordance with the overall cost function in Eq. (1), the framework of DRAMMS is sketched in Fig. 3. The two major components, optimal attribute matching (AM) and mutual-saliency (MS) weighting, will be detailed in Sections 2.2 and 2.3.

2.2. Attribute matching (AM)

This section addresses the first major component of DRAMMS, namely attribute matching (AM). The aim is to extract and select the optimal attributes that can reflect the geometric and anatomic context of each voxel, and at the same time to keep the attribute extraction and selection readily applicable to diverse registration tasks. This component consists of two modules: attribute extraction (described in Section 2.2.1) and attribute selection (described in Section 2.2.2).

2.2.1. Attribute extraction

The idea of characterizing each image voxel with a rich set of attributes has been previously explored in the community. In their pioneering work, Shen et al. (2003) incorporated geometric-moment-invariant (GMI) attributes, tissue membership attributes and boundary/edge attributes into a high-dimensional attribute vector to better characterize a voxel in human brain images. Since then, a number of geometric and texture attributes have been explored, including, but not limited to, intensity attributes (e.g., Ellingsen and Prince, 2009; Foroughi and Abolmaesumi, 2005), boundary/edge attributes (e.g., Shen et al., 2003; Zacharaki et al., 2008; Sundar et al., 2009; Ellingsen and Prince, 2009), tissue membership attributes (requiring task-specific segmentation) (e.g., Shen et al., 2003; Xing et al., 2008; Zacharaki et al., 2008; Ellingsen and Prince, 2009), wavelet-based attributes (e.g., Xue et al., 2004), local frequency attributes (e.g., Liu et al., 2002; Jian et al., 2005), local intensity histogram attributes (e.g., Shen, 2007; Yang et al., 2008), geodesic intensity histogram attributes (e.g., Ling and Jacobs, 2005; Li et al., 2009), tensor orientation attributes (specifically for diffusion tensor imaging registration) (e.g., Verma and Davatzikos, 2004; Munoz-Moreno et al., 2009; Yap et al., 2009),
and curvature attributes (specifically for surface matching) (e.g., Shen et al., 2001; Liu et al., 2004; Zhan et al., 2007; Ou et al., 2009a). These studies have all demonstrated improved registration accuracy as a result of more distinctive characterizations of voxels. However, as most of these attributes are pre-conditioned on segmentation, tissue labeling, or edge detection, and are tailored to specific image content or modality, there is still a need to systematically find a common set of attributes that is generally applicable to various image contents. There is also a need to select the optimal attribute components automatically, rather than heuristically. DRAMMS aims in this direction.

DRAMMS extracts a set of multi-scale and multi-orientation Gabor attributes at each voxel by convolving the images with a set of Gabor filter banks. The use of Gabor attributes in DRAMMS is mainly motivated by the following three properties of Gabor filter banks:

(1) General applicability and successful application in numerous tasks. Almost all anatomical images have texture information, at some scale and orientation, reflecting the underlying geometric and anatomical characteristics. This texture information can be effectively captured by Gabor attributes, as demonstrated in a variety of studies, including texture segmentation (e.g., Jain and Farrokhnia, 1991), image retrieval (e.g., Manjunath and Ma, 1996), cancer detection (e.g., Zhang and Liu, 2004) and tissue differentiation (e.g., Zhan and Shen, 2006; Xue et al., 2009). Recently, Gabor attributes have also been successfully used in Liu et al. (2002), Verma and Davatzikos (2004), and Elbakary and Sundareshan (2005) to register images of different contents/organs. This promises the use of Gabor attributes for a variety of image registration tasks. Previous methods have also pointed out some drawbacks related to the use of Gabor attributes, such as the high computational cost and the need for an appropriate choice of attribute components due to the redundant information they convey. DRAMMS addresses both drawbacks by developing a module for selecting the optimal Gabor components, which will be detailed in Section 2.2.2.

(2) Suitability for single- and multi-modality registration tasks. This is mainly because of the multi-scale property of Gabor filter banks. Specifically, the high-frequency Gabor filters often serve as edge detectors. The detected edge information is, to some extent, independent of the underlying intensity distributions, and is therefore suitable for multi-modality registration tasks. This holds even when the intensity distributions of the two images no longer follow a consistent relationship, in which case mutual-information based methods (Wells et al., 1996; Maes et al., 1997) are challenged (Liu et al., 2002). On the other hand, the low-frequency Gabor filters often serve as local image smoothers, and the corresponding attributes are analogous to images at coarse resolutions, which largely helps prevent the cost function from being trapped in local minima. As a demonstration, Fig. 4 shows a typical set of high- and low-frequency Gabor attributes for two typical human brain images from different modalities. In this case, the edge information extracted by the high-frequency Gabor filters (first row in the figure) provides the basis for multi-modality registration.

(3) Multi-scale and multi-orientation nature.
As Kadir and Brady (2001) pointed out, scale and orientation are closely related to the distinctiveness of attributes. Multi-scale and multi-orientation attributes are more likely to render each voxel distinctively identifiable, therefore reducing ambiguities in attribute-based voxel matching. For instance, compared with traditional edge detectors (e.g., Canny, Sobel), the multi-orientation Gabor attributes easily differentiate edges in horizontal and vertical directions (see Fig. 4).

Direct computation of 3D Gabor attributes is usually time consuming. To save computational cost, the approximation strategy of Manjunath and Ma (1996) and Zhan and Shen (2006) is adopted in this paper.

Fig. 4. Multi-scale and multi-orientation Gabor attributes extracted from two typical human brain images. To save space, shown here are only representative attributes from the real part of three orientations (π/2, π/3 and π) in the x-y plane (a full set of Gabor attributes is mathematically described in Eq. (6)).

Specifically, the 3D Gabor attributes at each voxel are approximated by convolving the 3D image with two groups of 2D Gabor filter banks in two orthogonal planes (the x-y and y-z planes). Mathematically, the two groups of 2D Gabor filter banks in the two orthogonal planes are

g_{m,n}(x, y) = a^{-m}\, g(a^{-m} x'_g, \; a^{-m} y'_g) \quad \text{in the x-y plane},    (2)

h_{m,n}(y, z) = a^{-m}\, h(a^{-m} y'_h, \; a^{-m} z'_h) \quad \text{in the y-z plane},    (3)

where a is the scale basis, set to 2 for image re-sampling. m = 1, 2, ..., M is the index of scales, with M being the total number of scales. As m varies, the scale factor a^{-m} leads to different sizes of the neighborhood from which the multi-scale Gabor attributes are extracted. x'_g = x cos(nπ/N) + y sin(nπ/N), y'_g = −x sin(nπ/N) + y cos(nπ/N), y'_h = y cos(nπ/N) + z sin(nπ/N) and z'_h = −y sin(nπ/N) + z cos(nπ/N) are rotated coordinates, where n = 1, 2, ..., N is the index of orientations, with N being the total number of orientations. As n varies, the Gabor envelopes rotate in the x-y and y-z planes, leading to multi-orientation Gabor attributes.

g(x, y) and h(y, z) in the equations above are known as the mother Gabor filters, which are complex-valued functions in the spatial domain. They are each obtained by modulating a Gaussian envelope with a complex exponential. Intuitively, they are elliptically-shaped covers (Gaussian envelopes) that specify the neighborhood around each voxel from which the Gabor attributes are extracted. Mathematically, the mother Gabor filters g(x, y) and h(y, z) are expressed as

g(x, y) = \frac{1}{2\pi\sigma_x\sigma_y} \exp\!\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2} + \frac{y^2}{\sigma_y^2}\right) + j\,2\pi f_x x\right],    (4)

h(y, z) = \frac{1}{2\pi\sigma_y\sigma_z} \exp\!\left[-\frac{1}{2}\left(\frac{y^2}{\sigma_y^2} + \frac{z^2}{\sigma_z^2}\right) + j\,2\pi f_y y\right],    (5)

where σ_x, σ_y and σ_z are the standard deviations of the Gaussian envelopes in the spatial domain, and f_x and f_y are modulating (shifting) factors in the frequency domain (often known as central frequencies) (Kamarainen, 2003).

As a result of attribute extraction, each voxel u = (x, y, z) is characterized by a Gabor attribute vector Ã_i(u) with dimension D = M × N × 4, as denoted in Eq. (6). Here the factor 4 arises from the two parts (real and imaginary) of the Gabor responses in the two orthogonal planes:

\tilde{A}_i(u) = \big[\, |\mathrm{Real}[(I_i * g_{m,n})(x, y)]|, \; |\mathrm{Imaginary}[(I_i * g_{m,n})(x, y)]|, \; |\mathrm{Real}[(I_i * h_{m,n})(y, z)]|, \; |\mathrm{Imaginary}[(I_i * h_{m,n})(y, z)]| \,\big]_{m = 1, 2, \ldots, M; \; n = 1, 2, \ldots, N}    (6)

We call this attribute vector Ã(·) the full Gabor attribute vector, in contrast to the optimal attribute vector A^q(·) to be selected in Section 2.2.2. The numbers of their dimensions are denoted D and d, respectively, where D is determined by the number of scales and orientations (D = M × N × 4), and d is determined automatically during the subsequent attribute selection process. Note that in Eq. (6) we have taken the magnitude (absolute value) of the real and imaginary parts of the Gabor responses; therefore, even if the image contrast changes, the edge information extracted by the high-frequency Gabor filters remains relatively stable, which is suitable for multi-modality registration.
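To make Eqs. (2)-(6) concrete, the sketch below builds a small 2D Gabor filter bank and stacks the magnitudes of the real and imaginary responses into a per-pixel attribute vector. For brevity it covers only the x-y plane bank of Eq. (2) (the y-z bank of Eq. (3) is handled analogously on a 3D volume), and the particular values of M, N, σ and f are illustrative choices for this sketch, not the parameters used by DRAMMS.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_bank_xy(M=3, N=4, a=2.0, sigma=2.0, f=0.25, size=15):
    """Scaled/rotated filters g_{m,n}(x, y) of Eq. (2), built from the mother
    Gabor filter of Eq. (4): a Gaussian envelope modulated by a complex exponential."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    bank = []
    for m in range(1, M + 1):
        s = a ** (-m)                                         # scale factor a^{-m}
        for n in range(1, N + 1):
            theta = n * np.pi / N
            xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate x'_g
            yr = -x * np.sin(theta) + y * np.cos(theta)       # rotated coordinate y'_g
            env = np.exp(-0.5 * (((s * xr) / sigma) ** 2 + ((s * yr) / sigma) ** 2))
            car = np.exp(1j * 2 * np.pi * f * (s * xr))       # complex carrier
            bank.append(s * env * car / (2 * np.pi * sigma ** 2))
    return bank

def full_gabor_attributes(image, M=3, N=4):
    """Full attribute vector of Eq. (6) for a 2D image: the magnitudes of the
    real and imaginary responses of every filter in the bank."""
    attrs = []
    for filt in gabor_bank_xy(M, N):
        attrs.append(np.abs(convolve(image, filt.real, mode='nearest')))
        attrs.append(np.abs(convolve(image, filt.imag, mode='nearest')))
    return np.stack(attrs)            # shape: (2*M*N, H, W); 4*M*N in the full 3D case

# usage: attrs = full_gabor_attributes(slice_2d.astype(float))
```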
Role of full Gabor attributes. Before selecting the optimal attributes and using the attribute vector for across-image matching, we need to make sure that the extracted full attribute vector Ã_i(u) in Eq. (6) is able to characterize voxels distinctively, at least within the image itself. By distinctive characterization, we mean that the attributes should be able to differentiate a point from all other points in the image. In Fig. 5, two special points (labelled by red and blue crosses in sub-figure (a)) and two ordinary points (labelled by orange and purple crosses in sub-figure (a)) are examined in a typical brain MR image. Similarity maps (b-e) are calculated between the attributes of each of these examined voxels and the attributes of all other voxels in the same image. From these similarity maps it can be observed that both the special voxels and the ordinary voxels are well localized, with fairly small ambiguities, by the full Gabor attributes Ã_i(u).

Fig. 5. Role of full Gabor attributes in characterizing voxels relatively distinctively within a typical brain image. Similarity maps (b-e) are created by calculating the attribute-wise similarity between a single point (depicted by red, blue, orange or purple crosses in (a)) and all points in the same image. Compared to the intensity attribute, the full Gabor attribute vector described in Eq. (6) leads to a far smaller number of voxels that are similar to the points being examined. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
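A within-image similarity map like those in Fig. 5 can be computed directly from the full attribute vectors. The short sketch below uses the dimension-normalized similarity that is formalized in Eq. (7) of the next subsection; the function and variable names are illustrative.

```python
import numpy as np

def within_image_similarity_map(attrs, point):
    """Sketch of the similarity maps of Fig. 5.

    attrs : (D, H, W) full Gabor attribute vectors of one image
    point : (row, col) coordinates of the examined voxel
    Returns an (H, W) map of 1 / (1 + ||A(u) - A(v)||^2 / D) between the examined
    voxel u and every voxel v of the same image.
    """
    D = attrs.shape[0]
    ref = attrs[:, point[0], point[1]][:, None, None]
    sq_dist = np.sum((attrs - ref) ** 2, axis=0) / D
    return 1.0 / (1.0 + sq_dist)
```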

2.2.2. Attribute selection

A disadvantage of Gabor attributes is the redundancy among attributes, which is caused by the non-orthogonality of Gabor filters at different scales and orientations. This redundancy not only increases the computational cost but, more importantly, may reduce the distinctiveness of the attribute representation, causing ambiguities in attribute matching (Manjunath and Ma, 1996; Kamarainen, 2003). A learning-based method is therefore designed to select the optimal components.

The main idea of attribute selection is the following: if provided with some pairs of corresponding voxels from the two images, we can treat them as training samples and employ a machine learning method to select the optimal attribute components, such that their similarity and matching uniqueness are preserved or preferably increased. Two issues sit at the center of formulating this idea: (1) selection of training voxel pairs — in the context of a general-purpose registration algorithm like ours, they should be selected directly from the images being registered and the selection should be fully automatic, without assuming any a priori knowledge; moreover, representative voxel pairs should have high similarity and matching uniqueness; (2) selection of optimal attributes — here the criterion is to preserve or increase the similarity and matching uniqueness between these training voxel pairs. Both issues rely on quantifications of the matching similarity and matching uniqueness between voxel pairs.

The matching similarity sim(p, q) between a pair of voxels (p ∈ Ω_1, q ∈ Ω_2) can be defined based on their attribute vectors,

\mathrm{sim}\big(\tilde{A}_1(p), \tilde{A}_2(q)\big) = \left(1 + \frac{1}{D}\,\|\tilde{A}_1(p) - \tilde{A}_2(q)\|^2\right)^{-1} \in [0, 1]    (7)

where a smaller Euclidean distance between two attribute vectors indicates higher similarity. Note that the squared Euclidean distance between two attribute vectors is normalized by their dimensionality D, so as to make similarities calculated from different numbers of attributes comparable. Matching uniqueness can be encoded by the mutual-saliency weight ms(·,·), which will be elaborated in Section 2.3. With these quantifications, we can proceed to the two steps towards selecting the optimal Gabor attribute components.

Step 1: Selecting training voxel pairs. To encourage the training voxel pairs to represent different anatomical regions, they should be scattered in the image domain and far away from each other. Accordingly, DRAMMS regularly partitions the subject image I_1 into J regions Ω_1^{(j)} (j = 1, 2, ..., J) and selects from each region a voxel pair (p_j^I ∈ Ω_1^{(j)} ⊂ Ω_1, q_j^I ∈ Ω_2) that has the highest matching similarity and matching uniqueness (measured by mutual-saliency), under the full Gabor attributes. Mathematically,

(p_j^I, q_j^I) = \arg\max_{p \in \Omega_1^{(j)} \subset \Omega_1, \; q \in \Omega_2} \; \underbrace{\mathrm{sim}\big(\tilde{A}_1(p), \tilde{A}_2(q)\big)}_{\text{Similarity}} \cdot \underbrace{ms(p, q)}_{\text{Mutual-Saliency}}    (8)

Note that the full Gabor attribute vector Ã(·) is used at this step, since no selection has been conducted up to this stage. The mutual-saliency measure ms(p, q) will be elaborated in Section 2.3; for now, the bottom line is that it reflects the matching uniqueness between p ∈ Ω_1 and q ∈ Ω_2. Fig. 6 illustrates the selection of training voxel pairs.
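The following sketch mimics Step 1 on 2D attribute images: the subject image is divided into a regular grid of regions and, for each region, the voxel pair maximizing sim(·,·)·ms(·,·) is kept. The brute-force search over a sub-sampled candidate grid, the region grid size, and the `ms_fn` placeholder for the mutual-saliency measure of Section 2.3 are all simplifying assumptions for illustration.

```python
import numpy as np

def select_training_pairs(A1, A2, ms_fn, grid=(4, 4), stride=4):
    """Sketch of Step 1 (Eq. (8)) on 2D attribute images.

    A1, A2 : (D, H, W) full Gabor attributes of the subject and template images
    ms_fn  : callable ms(p, q) -> float, a stand-in for the mutual-saliency
             measure defined in Section 2.3
    grid   : number of regular regions along each image axis (J = grid[0] * grid[1])
    stride : sub-sampling step that keeps this brute-force search tractable
    """
    D, H, W = A1.shape
    # coarse grid of candidate template voxels q (anywhere in Omega_2)
    q_list = [(r, c) for r in range(0, H, stride) for c in range(0, W, stride)]
    pairs = []
    for bi in range(grid[0]):
        for bj in range(grid[1]):
            r0, r1 = bi * H // grid[0], (bi + 1) * H // grid[0]
            c0, c1 = bj * W // grid[1], (bj + 1) * W // grid[1]
            best, best_score = None, -np.inf
            for r in range(r0, r1, stride):          # candidate p inside region j
                for c in range(c0, c1, stride):
                    for (r2, c2) in q_list:
                        diff = A1[:, r, c] - A2[:, r2, c2]
                        sim = 1.0 / (1.0 + diff @ diff / D)      # Eq. (7)
                        score = sim * ms_fn((r, c), (r2, c2))
                        if score > best_score:
                            best_score, best = score, ((r, c), (r2, c2))
            pairs.append(best)
    return pairs
```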
Here a regular partition is used instead of a more complicated organ/tissue segmentation, in order to keep DRAMMS a general-purpose registration method without assumptions on segmentation. Note also that the template image I_2 is not partitioned because, at this stage, no transformation has been performed and no corresponding regions should be assumed. Please also note that the training voxels selected here are used only for selecting the optimal attributes. They are not landmarks for driving the registration; registration is instead driven by all voxels.

Fig. 6. Sketch of the selection of representative training voxel pairs. Ω_i (i = 1, 2) are the image spaces where the two images I_i (i = 1, 2) reside. (p_j^I, q_j^I), j = 1, ..., J, are the J pairs of voxels that have been selected with the highest matching similarity and highest matching uniqueness (measured by mutual-saliency).

Step 2: Selecting optimal attributes. Once the training voxel pairs (p_j^I, q_j^I), j = 1, ..., J, have been determined, DRAMMS selects a subset of optimal attribute components A^q(·) out of the full Gabor attribute vector Ã(·), such that they maximize the overall similarity and matching uniqueness on these selected training voxel pairs. Mathematically,

\{A^{I_1}, A^{I_2}\} = \arg\max_{A} \sum_{j=1}^{J} \underbrace{\mathrm{sim}\big(A_1(p_j^I), A_2(q_j^I)\big)}_{\text{Similarity}} \cdot \underbrace{ms(p_j^I, q_j^I)}_{\text{Mutual-Saliency}}    (9)

As in Eq. (8), ms(·,·) is the mutual-saliency that reflects the matching uniqueness and will be elaborated in Section 2.3. Eqs. (8) and (9) actually share almost the same objective function [sim(·,·) · ms(·,·)]. The difference is that in Eq. (8) we optimize over all possible voxel pairs, holding the full attribute vector Ã(·) unchanged, whereas in Eq. (9) we optimize over all possible attribute subsets, holding the training voxel pairs (p_j^I, q_j^I), j = 1, ..., J, unchanged.

To implement Eq. (9), DRAMMS adopts an iterative backward elimination (BE) and forward inclusion (FI) strategy for attribute selection. Such a strategy is commonly used for attribute/variable selection in the machine learning community (e.g., Guyon and Elisseeff, 2003; Fan et al., 2007). The iterative process is specified as follows. Starting from the full set of D Gabor attributes Ã(·), we each time eliminate the one Gabor component whose removal increases the objective function [sim(·,·) · ms(·,·)] by the largest amount (this is known as backward elimination). We continue the elimination until no component can be eliminated (meaning that eliminating any one attribute component would decrease the objective function). This ends one round of backward elimination. Starting from there, we include the previously-eliminated Gabor components one at a time, each time choosing the one whose inclusion increases the objective function [sim(·,·) · ms(·,·)] by the largest amount (this is known as forward inclusion). We keep including until no component can be included (meaning that including any one attribute component would decrease the objective function). This ends one round of forward inclusion. Finally, we iterate between backward elimination and forward inclusion, monotonically increasing the objective function, until no attribute component can be eliminated or included. Convergence is guaranteed as the number of attributes is bounded below and above.
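A compact sketch of this backward-elimination / forward-inclusion loop is given below, assuming the training pairs and their mutual-saliency values have already been computed. Attribute components are indexed by their position in the full vector, and greedy remove/add moves are accepted only when they increase the objective of Eq. (9); names are illustrative.

```python
import numpy as np

def objective(subset, A1, A2, pairs, ms_vals):
    """Objective of Eq. (9): sum over training pairs of sim(.,.) * ms(.,.),
    computed using only the attribute components listed in `subset`."""
    if not subset:
        return -np.inf
    total = 0.0
    for (p, q), ms in zip(pairs, ms_vals):
        diff = A1[subset, p[0], p[1]] - A2[subset, q[0], q[1]]
        total += ms / (1.0 + diff @ diff / len(subset))       # sim(p, q) * ms(p, q)
    return total

def select_optimal_attributes(A1, A2, pairs, ms_vals):
    """Iterative backward elimination (BE) / forward inclusion (FI) of Section 2.2.2."""
    D = A1.shape[0]
    selected = list(range(D))                      # start from the full attribute set
    current = objective(selected, A1, A2, pairs, ms_vals)
    improved = True
    while improved:
        improved = False
        # backward elimination: greedily drop the component whose removal helps most
        while len(selected) > 1:
            scores = [(objective([a for a in selected if a != k], A1, A2, pairs, ms_vals), k)
                      for k in selected]
            best_score, best_k = max(scores)
            if best_score <= current:
                break
            selected.remove(best_k)
            current, improved = best_score, True
        # forward inclusion: greedily re-add the eliminated component that helps most
        while True:
            excluded = [k for k in range(D) if k not in selected]
            if not excluded:
                break
            scores = [(objective(selected + [k], A1, A2, pairs, ms_vals), k) for k in excluded]
            best_score, best_k = max(scores)
            if best_score <= current:
                break
            selected.append(best_k)
            current, improved = best_score, True
    return sorted(selected)
```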
Upon convergence, the objective function [sim(·,·) · ms(·,·)] is maximized and the remaining attributes form the optimal set of attributes that we eventually use for attribute matching, denoted A^q(·).

Fig. 7. A typical scenario for attribute selection. (a) The value of the objective function [sim(·,·) · ms(·,·)] with respect to the iteration index. (b) The number of attributes selected with respect to the iteration index. Starting from the full set of attributes, iterative backward elimination (BE) and forward inclusion (FI) processes are employed to select the optimal attributes. The objective function increases monotonically until no other attribute can be eliminated or included to increase it further. When the objective function stops increasing, the corresponding attributes are the ones selected as the optimal set.

Fig. 7 shows a typical attribute selection process for two cardiac images being registered. In this case, the objective function keeps increasing during one round of the BE process (where 39 attributes were eliminated from the full attribute set), one round of the FI process (where 9 previously-eliminated attributes were included back into the remaining attribute set), and one more round of the BE process (where 4 attributes were eliminated from the attribute set), until it reaches the maximum, at which point the optimal attribute set has been selected.

Role of optimal Gabor attributes. Fig. 8 compares the intensity attribute, gray-level co-occurrence matrix (GLCM) texture attributes (Haralick, 1979), the full Gabor attributes, and the optimized Gabor attributes, in terms of the matching ambiguities caused by the different attributes. Similarity maps are generated by these attributes between a special/ordinary point in the subject brain/cardiac image and all points in the template image. The optimal Gabor attributes lead to highly distinctive attribute characterizations, with fewer matching candidates both for a special point (depicted by red crosses) and for an ordinary point (depicted by blue crosses). Since the similarity map for a given voxel now looks more like a delta function in the neighborhood of this voxel, the optimal Gabor attributes offer great promise in reducing the risk of being trapped in local minima.

2.3. Using mutual-saliency map to modulate registration

Section 2.2 above described extracting a full set of Gabor attributes (Section 2.2.1) and selecting the optimal Gabor attributes (Section 2.2.2) to characterize each voxel distinctively. That finishes the first component of DRAMMS, namely attribute matching (AM). This section elaborates the second component of DRAMMS, namely mutual-saliency (MS) weighting, which better modulates the registration process. As addressed in the introduction, an ideal optimization process should utilize all voxels but assign a continuously-valued weight to each voxel, based on the capability of this voxel to establish unique correspondences across images.

The idea of quantifying matching uniqueness was perhaps first investigated in Anandan (1989), Duncan et al. (1991) and their follow-up studies (McEachen and Duncan, 1997; Shi et al., 2000). They developed a confidence quantity for motion tracking on surface points, which measures whether the matching of two surface points is unique in the neighborhood, based on a similarity defined on their curvatures. Other works explicitly segment the regions of interest (ROIs) in a joint segmentation and registration framework (Yezzi et al., 2001; Wyatt and Nobel, 2003; Chen et al., 2005; Pohl et al., 2006; Wang et al., 2006; Xue et al., 2008; Ou et al., 2009a; Periaswamy and Farid, 2006) and assign higher weights to the segmented ROIs.
Since both approaches depend on explicit detection or segmentation of certain feature voxels/regions, they are not generally applicable to diverse tasks. By extension, we aim to develop a quantity that measures matching uniqueness for each and every voxel in the image; moreover, the quantification should not rely on any prerequisite detection or segmentation. Recent work (Huang et al., 2004; Bond and Brady, 2005; Yang et al., 2006; Wu et al., 2006; Mahapatra and Sun, 2008) assumed that more salient regions could establish more unique correspondences and hence should be assigned higher weights. They reported improved registration accuracies in a number of registration tasks. However, this intuitive assumption does not always hold, because regions that are salient in one image are not necessarily salient in the other image, or, more importantly, do not necessarily have unique correspondences across images. A simple counter-example can be observed in Fig. 1: the boundary of the lesion is salient in the diseased brain image, but it does not have a correspondence in the normal brain image, so assigning a higher weight to it would most probably harm the registration process. In other words, (single) saliency in one image does not necessarily indicate matching uniqueness between two images. To remedy this problem, Luan et al. (2008) extended the single saliency criterion into a double (joint) saliency criterion. In their work, higher weights are assigned to a pair of voxels if they are respectively salient in the two input images. However, two nearby points may belong to completely different anatomical structures even though they are both salient in their own images. For instance, also in Fig. 1, the boundary of the lesion is salient in the subject image, whereas another voxel at the same spatial location may coincidentally be salient in the other image, yet they belong to different tissue types. The dilemma of double (joint) saliency is actually similar to that of single saliency: saliency is individually measured in one image, so it does not necessarily indicate matching uniqueness across images.

To directly measure matching uniqueness across two images, DRAMMS extends the concepts of single saliency and double saliency into the concept of mutual-saliency. Generally speaking, a matching is unique if the two points are similar to each other and, meanwhile, not similar to anything else in the neighborhood. That is, the similarity map between a point u in one image and all the points in the neighborhood in the other image should exhibit a delta-shaped function, with the pulse located right at its corresponding point T(u), as shown in Fig. 9. Therefore, to calculate the mutual-saliency value ms(u, T(u)), we need to quantitatively check the existence and height of a delta function in the similarity map generated between voxel u and all voxels in the neighborhood of T(u). This is the idea behind the definition of mutual-saliency. This idea is formulated in Eq. (10) and in the associated Fig. 10. The mutual-saliency value ms(u, T(u)) is calculated by dividing the mean similarity between u and all voxels in the core neighborhood (CN) of T(u) by the mean similarity between u and all voxels in
the peripheral neighborhood (PN) of T(u). High mean similarity in CN and low mean similarity in PN indicate unique matching, and therefore a high mutual-saliency value:

ms(u, T(u)) = \frac{\mathrm{MEAN}_{v \in CN(T(u))}\big[\mathrm{sim}(A_1(u), A_2(v))\big]}{\mathrm{MEAN}_{v \in PN(T(u))}\big[\mathrm{sim}(A_1(u), A_2(v))\big]}    (10)

where sim(·,·), the attribute-based similarity between two voxels, is defined in the same way as in Eq. (7). Note that voxels between the core and peripheral neighborhoods of T(u) are not included in the calculation, because there is typically a smooth transition from high to low similarities, especially for coarse-scale Gabor attributes. For the same reason, the radii of these neighborhoods are adaptive to the scale at which the Gabor attributes are extracted. In particular, in accordance with the Gabor parameters that we will discuss in Section 5.2, the radii of the core, transitional and peripheral neighborhoods are 2, 5 and 8 voxels, respectively, for a typical isotropic 3D brain image. In the implementation, the mutual-saliency map is updated each time the transformation T is updated, as in an EM strategy, since the mutual-saliency ms(·,·) and the transformation T are interleaved by Eqs. (10) and (1).

Role of mutual-saliency maps. The mutual-saliency map effectively identifies outlier regions in which no good correspondence can be found and assigns them low weights. This helps reduce their negative impact in the optimization process. In Fig. 11, motivated by our work on matching histological sections with MRI, we have simulated cross-shaped tears (cuts) in the subject image (a), which are typical when sections are stitched together. The mutual-saliency map (f) assigns low weights to the tears/cuts, thereby reducing their negative impact on registration. On the contrary, registration without the mutual-saliency map tends to fill in the tears by over-aggressively pulling other regions. This causes artificial results, as pointed out by the arrows in Fig. 11c. Besides, by using mutual-saliency weighting, the deformation field in Fig. 11g is smoother and more realistic than the one obtained without the mutual-saliency map (d).

Fig. 8. Role of attribute selection in reducing matching ambiguities, as illustrated on special voxels (red crosses) and ordinary voxels (blue crosses) in brain and cardiac images of different individuals. Similarity maps are generated between a voxel (red or blue) in the subject image and all voxels in the template image. GLCM, gray-level co-occurrence matrix (Haralick, 1979), is another commonly used texture attribute descriptor. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 9. The idea of the mutual-saliency measure. The matching between a pair of voxels u and T(u) is unique if they are similar to each other and not similar to anything else in the neighborhood. Therefore, a delta function in the similarity map indicates a unique matching, and hence a high mutual-saliency value.

Fig. 10. Explanation of the mutual-saliency definition in Eq. (10). Different colors encode different layers of neighborhoods. High similarity in the core neighborhood (CN) and low similarity in the peripheral neighborhood (PN) indicate unique matching and hence high mutual-saliency values. This figure is better viewed in the color version. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
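The following sketch computes ms(u, T(u)) for a single voxel pair on 2D attribute images, directly following Eq. (10). The neighborhood radii (2 / 5 / 8 voxels for the core, transitional and peripheral layers) are the values quoted above, the similarity is that of Eq. (7), and the variable names are illustrative.

```python
import numpy as np

def mutual_saliency(A1, A2, u, Tu, r_core=2, r_trans=5, r_peri=8):
    """Sketch of Eq. (10) on 2D attribute images.

    A1, A2 : (d, H, W) attribute vectors of the two images
    u      : (row, col) voxel in image 1
    Tu     : (row, col) of its tentative correspondence T(u) in image 2
    """
    d, H, W = A2.shape
    a_u = A1[:, u[0], u[1]]
    sims_cn, sims_pn = [], []
    for r in range(max(0, Tu[0] - r_peri), min(H, Tu[0] + r_peri + 1)):
        for c in range(max(0, Tu[1] - r_peri), min(W, Tu[1] + r_peri + 1)):
            dist = np.hypot(r - Tu[0], c - Tu[1])
            if dist > r_peri:
                continue
            diff = a_u - A2[:, r, c]
            sim = 1.0 / (1.0 + diff @ diff / d)          # similarity of Eq. (7)
            if dist <= r_core:
                sims_cn.append(sim)                      # core neighborhood CN(T(u))
            elif dist > r_trans:
                sims_pn.append(sim)                      # peripheral neighborhood PN(T(u))
            # voxels in the transitional ring are deliberately excluded
    if not sims_pn:                                      # degenerate case near the image border
        return 1.0
    return float(np.mean(sims_cn) / np.mean(sims_pn))
```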

Fig. 11. Role of the mutual-saliency map in accounting for outlier regions and the consequent partial loss of correspondence. (a) Subject image, with simulated deformation and tears from (b) the template image. (c) Registered image without using the mutual-saliency map. (d) Deformation field associated with (c). (e) Registered image using the mutual-saliency map. (f) Mutual-saliency map associated with (e). (g) Deformation field associated with (e). With red points marking spatial locations having exactly the same coordinates in all the sub-figures, we show that the upper-left corner of the gray patch (pointed to by the yellow arrow in (c)) initially corresponds between the source (a) and target (b) images; this correspondence is preserved in the registered image (e) when the mutual-saliency map is used, but destroyed in the registered image (c) when the mutual-saliency map is not used. Since this corner point is right next to the image cut, the preservation of correspondence demonstrates the decreased negative impact caused by missing data when the mutual-saliency map is used.

3. Numerical implementation

A complete registration framework usually consists of three parts: (1) a similarity measure, which defines the criterion to align the two images; (2) a deformation model, which defines the mechanism to transform one image to the other; and (3) an optimization strategy, which is used to calculate the best parameters of the deformation model such that the similarity between the two images is maximized. The attribute matching and mutual-saliency weighting components of DRAMMS together (Eq. (1)) are essentially a new definition of the image similarity measure, namely the first part of a registration framework. It can readily be combined with any deformation model and any optimization strategy to form a complete registration framework. For the sake of completeness, we specify the free form deformation (FFD) as the deformation model and discrete optimization as the optimization strategy in the following two sub-sections.

3.1. Choice of deformation model

A wide variety of voxel-wise deformation models can be selected, such as demons (Thirion, 1998; Vercauteren et al., 2009), the fluid flow model (Christensen et al., 1994; D'Agostino et al., 2003), and the free form deformation (FFD) model (Rueckert et al., 1999, 2006). In our implementation, the diffeomorphic FFD (Rueckert et al., 2006) is chosen, because of its flexibility in producing a smooth and diffeomorphic deformation field, and its popularity in the deformable registration community. In the following experiments, the distance between the control points is set to 7 voxels in the x and y directions and 2 voxels in the z direction. The deformation is kept diffeomorphic by constraining the displacement at each control point (node) of the FFD model not to exceed 40% of the spacing between two adjacent control points (nodes). Choi and Lee (2000) elegantly showed that this 40% hard constraint guarantees the diffeomorphism of the deformation field. Note that this 40% hard constraint is the maximum displacement in each iteration of each resolution; therefore, the composition of transformations over multiple iterations and multiple resolutions can still cope with large deformations while maintaining diffeomorphism.
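As a small illustration of this constraint, the sketch below clamps a control-point displacement update to 40% of the grid spacing along each axis; the array layout and function name are assumptions made for this example.

```python
import numpy as np

def clamp_ffd_update(node_disp, spacing=(7, 7, 2), ratio=0.4):
    """Clamp per-iteration FFD control-point displacements to `ratio` (40%) of
    the control-point spacing along each axis, which keeps the resulting
    deformation diffeomorphic (Choi and Lee, 2000).

    node_disp : (3, Nx, Ny, Nz) displacement vectors on the FFD control grid
    spacing   : control-point spacing in voxels (7 x 7 x 2 in the experiments)
    """
    clamped = node_disp.copy()
    for axis, h in enumerate(spacing):
        limit = ratio * h
        clamped[axis] = np.clip(clamped[axis], -limit, limit)
    return clamped
```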
3.2. Choice of optimization strategy

To optimize the parameters of the FFD deformation model, a numerical optimization strategy is needed. A large pool of optimization strategies can be used, including gradient descent, Newton's method, Powell's method, and discrete optimization (Komodakis et al., 2008). In our implementation, we have chosen discrete optimization, a state-of-the-art optimization strategy known for its computational efficiency and its robustness with respect to local optima (Komodakis et al., 2008; Glocker et al., 2008). Loosely speaking, the discrete optimization strategy densely samples the displacement space in a discrete manner, and seeks a group of displacement vectors on the FFD control points (nodes) in a combinatorial fashion, so that they collectively minimize the matching cost. Table 1 compares the computational time of gradient descent optimization and discrete optimization; the speedup from discrete optimization is easy to observe. Since the choice of numerical optimization is not a primary contribution of this work, we do not include further comparisons; more can be found in Glocker et al. (2008).

Table 1
Demonstration of the high computational efficiency of the discrete optimization (DisOpt) strategy over the traditional gradient descent (GradDes) strategy. Registration accuracy from the two optimization strategies is almost equivalent and therefore not listed in this table. Computational time is recorded for registering two (inter-subject) brain images, two (inter-subject) cardiac images and two (multi-modality) prostate images, for attribute matching (AM) both with and without mutual-saliency (MS) weighting. Unit: minutes.

                      FFD + GradDes              FFD + DisOpt
                      AM w/ MS    AM w/o MS      AM w/ MS    AM w/o MS
Brain images
Prostate images
Cardiac images

For the sake of completeness, the discrete optimization strategy is briefly described below. Interested readers are referred to Komodakis et al. (2008) and Glocker et al. (2008) for details. Let Φ = {φ} be the entire set (grid) of control points (nodes) φ; then the energy in Eq. (1) can be rewritten, after being projected onto the nodes, as:

E(T) = \frac{1}{|\Phi|} \sum_{\phi \in \Phi} \int_{u \in \Omega_1} g^{-1}(\|u - \phi\|)\, ms(u, T(u))\, \|A_{I_1}(u) - A_{I_2}(T(u))\|^2 \, du \; + \; \lambda R(\nabla_{\Phi} T_{\phi})    (11)

where g^{-1}(·) is an inverse mapping function that maps the influence of a node φ ∈ Φ to every ordinary point u, and T_φ is the translation applied to node φ of the grid Φ. Compared to Glocker et al. (2008), here the inverse weighting function g^{-1}(·) acts only as an apodizer function, which does not weight the voxels with respect to their distance from the control points. The only weights given are those due to the mutual-saliency value ms(·,·).

Following Glocker et al. (2008), the registration problem is cast as a multi-labelling one and the theory of Markov random fields (MRFs) is used to formulate it mathematically. The solution space is discretized by sampling along the x, y and z directions as well as along their diagonals, resulting in a set L of (18n + 1) labels, where n is the number of sampled labels (discretized displacements) along each direction. What we now search for is which label to attribute to each node of the deformation grid, i.e., which displacement to apply to each node. The energy in Eq. (11) can be approximated by an MRF energy whose general form is the following:

E_{MRF}(l) = \sum_{\phi \in \Phi} \Big( V_{\phi}(l_{\phi}) + \sum_{\psi \in N(\phi)} V_{\phi\psi}(l_{\phi}, l_{\psi}) \Big)    (12)

where l is the labeling (discretized displacement) that we search for, l_φ, l_ψ ∈ L, and V_φ, V_φψ are the unary and pair-wise potentials, respectively. The unary and pair-wise potentials encode the data term and the regularization term of the energy in Eq. (11). Specifically, for the data term:

V_{\phi}(l_{\phi}) = \int_{u \in \Omega_1} g^{-1}(\|u - \phi\|)\, ms(u, T(u))\, \|A_{I_1}(u) - A_{I_2}(T(u))\|^2 \, du    (13)

The regularization is defined in the label domain as a simple vector difference, thus:

V_{\phi\psi}(l_{\phi}, l_{\psi}) = \|T_{\phi} - T_{\psi}\|    (14)

which encourages nearby control points (nodes) to have consistent geometric displacements. In this discrete optimization, the number of labels (discretized displacements) in the search space is the key factor balancing registration accuracy and computational efficiency. More labels correspond to a denser sampling of the possible displacements, and therefore higher accuracy but lower computational efficiency. A good trade-off is obtained by a multi-resolution approach, where greater deformations (coarsely-distributed labels in a larger search space) are recovered in the beginning and the solution is refined at every step by considering smaller deformations (densely-distributed labels in a smaller search space). In this way, the solution can be precise enough while the method remains computationally tractable.
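The sketch below enumerates the (18n + 1)-label displacement set described above (the zero displacement plus n magnitudes along the x, y, z axes and their in-plane diagonals) and evaluates the pair-wise potential of Eq. (14); the maximum sampled displacement passed in the usage line is an illustrative value.

```python
import numpy as np
from itertools import product

def displacement_labels(n, max_disp):
    """Discretized displacement space: the zero label plus n magnitudes along the
    x, y, z axes and their in-plane diagonals, i.e. 18*n + 1 labels in 3D."""
    directions = [np.array(d, dtype=float) for d in product((-1, 0, 1), repeat=3)
                  if 1 <= sum(abs(c) for c in d) <= 2]        # 6 axes + 12 plane diagonals
    labels = [np.zeros(3)]
    for k in range(1, n + 1):
        step = max_disp * k / n
        labels += [step * d / np.linalg.norm(d) for d in directions]
    return labels                                             # len(labels) == 18*n + 1

def pairwise_potential(label_a, label_b):
    """Pair-wise potential of Eq. (14): a simple vector difference that encourages
    neighboring control points to receive consistent displacements."""
    return float(np.linalg.norm(np.asarray(label_a) - np.asarray(label_b)))

# usage: L = displacement_labels(n=4, max_disp=2.8)   # 73 labels per node
```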
A list of the parameters used in this implementation is as follows. The regularization parameter λ was set to 0.1 for the discrete optimization in Eq. (11). The g^{-1}(·) indicator function was defined in such a way that all the voxels belonging to a 26-neighborhood centered at a node contribute to the energy projected onto it. A number of n = 4 labels were sampled per direction. The distance between the control points was set to 7 voxels in the x and y directions and 2 voxels in the z direction. All the experiments were run as C code on a 2.8 GHz Intel Xeon processor under the Linux operating system, and the computational time of the discrete optimization for several typical scenarios can be found in the rightmost two columns of Table 1.

4. Results

DRAMMS was tested extensively in various registration tasks containing different image modalities and organs. The results are compared with those obtained from mutual-information (MI) based FFD, another commonly-used deformable registration method. The public software package Medical Image Processing, Analysis and Visualization (MIPAV) (McAuliffe et al., 2001), available from the NIH, is used for the MI-based FFD registration. MIPAV is used because of its friendly cross-platform user interface, its computational efficiency and its popularity within the community. Specifically, the MI-based FFD in MIPAV is implemented in a multi-resolution fashion with Powell's optimization strategy.

4.1. Simulated images

Registration results on a set of simulated images have already been shown in Fig. 11. In that case, we simulated a partial loss of correspondence by generating a cut in the subject image (Fig. 11a). DRAMMS largely reduced the negative impact caused by the outlier regions and therefore generated anatomically more meaningful results than MI-based FFD.

4.2. Inter-subject registration

Our second set of experiments is performed on registration between brain images and between cardiac images from different individuals (the ones shown in the left column of Fig. 8). The images being registered are of the same modality. Therefore, the registration results can be quantitatively compared in terms of the mean squared difference (MSD) and the correlation coefficient (CC) between the registered and the template images; high registration accuracy should correspond to decreased MSD and increased CC. This evaluation criterion has also been used in Rueckert et al. (1999). In this experiment, we recorded the registration accuracy obtained by intensity, by Gabor attributes, and by optimal Gabor attributes, both with and without mutual-saliency weighting. Overall, Fig. 12 shows that each of the DRAMMS components provides an additive improvement of registration accuracy over MI-based FFD. The largest improvement over MI-based FFD occurs when replacing the one-dimensional intensity attribute with the high-dimensional Gabor attributes.

4.3. Atlas construction

Our third set of experiments consists of atlas construction from human brain images.

Fig. 12. Quantitative evaluation of the registration accuracies for inter-subject brain and cardiac images, in terms of MSD and CC between the registered and template images. The various parts of DRAMMS (feature extraction, feature selection, mutual-saliency weighting) are added into the experiment sequentially. As a result, this figure shows that each of the DRAMMS components provides an additive improvement over MI-based FFD.

Fig. 13. Thirty randomly selected brain images used for constructing the atlas. The template image is outlined by a red box. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

Fig. 14. Brain atlases obtained from the 30 randomly selected images in Fig. 13. (a) Template, the same as the one in the red box in Fig. 13. (b) Atlas obtained by affine registration. (c) Atlas obtained by MI-based FFD registration. (d) Atlas obtained by DRAMMS. The sharper average in (d) indicates higher registration accuracy for DRAMMS on average.

Fig. 15. Further comparison of the affine, MI-based FFD and DRAMMS registration algorithms in atlas construction. (a1-c1) The difference image (= |atlas − template|) obtained from these three registration methods. (a2-c2) The standard deviation among all the warped images, for these registration methods, respectively. The smaller difference from the template and the smaller standard deviation among all warped images indicate the accuracy and consistency of DRAMMS.

Fig. 16. Quantitative comparison of registration methods for each subject. (a) The mean squared difference (MSD) between the warped image and the template over all foreground voxels. (b) The correlation coefficient between the warped image and the template. (c) The errors (in voxels) against manually-defined landmark correspondences.

4.3. Atlas construction

Our third set of experiments consists of atlas construction from human brain images. Fig. 13 shows 30 randomly selected brain MR images used for constructing an atlas. Each brain image is registered (in 3D) to the template (outlined by the red box), and the warped images are then averaged to form an atlas. The constructed atlases are shown in Fig. 14, where DRAMMS is observed to produce a sharper average, indicative of higher registration accuracy on average. Differences between the constructed atlases and the template image are further visualized in the top row of Fig. 15. Also shown, in the bottom row of Fig. 15, is the voxel-wise standard deviation among all warped images, for the affine, MI-based FFD and DRAMMS methods, respectively. The smaller difference from the template in the top row and the smaller standard deviation among all warped images in the bottom row indicate the accuracy and consistency of DRAMMS relative to MI-based FFD. Fig. 16 shows the mean squared difference (MSD), the correlation coefficient (CC), and the errors against expert-defined landmarks, between each warped subject image and the template image, for the three registration methods, respectively. We especially emphasize Fig. 16c, where DRAMMS recovers voxel correspondences closer to the expert-defined locations in every single pair-wise case, compared with the MI-based FFD method.

4.4. Multi-modality registration

Our fourth set of experiments involves some fairly challenging multi-modality registration tasks. Normally, the mutual-information method is used for multi-modality registration, as long as the intensity distributions of the two images follow a consistent relationship. However, when this assumption of a consistent relationship between intensity distributions is violated, mutual-information-based methods are usually greatly challenged. The registration between the histological and MR images shown in Fig. 17 is one such case. As a brief background, histological images are usually acquired ex vivo, by physically sectioning an organ into slices and examining the pathology of those slices under a microscope. Because of their ability to reveal pathologically-authenticated ground truth for lesions and tumors, they are often used as a reference for labeling lesions and tumors in in vivo MR images of the same organ, so as to help future disease detection directly in MRI (Meyer et al., 2006; Dauguet et al., 2007; Zhan et al., 2007; Ou et al., 2009a). Therefore, histology-MRI registration is often needed.

Fig. 17. Multi-modality registration between (a) histological and (b) MR images of the same prostate. Crosses of the same color denote the same spatial locations in all images, in order to better visualize the registration accuracy. Circles of the same color roughly denote corresponding regions, which are used to demonstrate the lack of a consistent relationship between their intensity distributions.

Histology-MRI registration is challenged by different imaging principles (ex vivo vs. in vivo), different resolutions (µm vs. mm level) and different image contrasts, which lead to an inconsistent relationship between their intensity distributions. For instance, in Fig. 17, we can observe dark regions corresponding to dark regions across images (blue circles), white regions corresponding to white regions across images (blue circles), but also dark regions corresponding to white regions (purple circles). In addition, some structures are clearly present in one image but almost invisible in the other. In such cases, mutual-information methods tend to be challenged, as demonstrated in a number of studies (Pitiot et al., 2006; Meyer et al., 2006; Dauguet et al., 2007). To make things worse, tears/cuts often appear in the histological images, causing partial loss of correspondences. In Fig. 17, crosses of the same color have been placed at the same spatial locations in sub-figure (b), the MR image, and sub-figure (d), the histological image warped by DRAMMS, in order to visually reveal whether the anatomical structures have been successfully aligned. Compared with the artificial results obtained by MI-based FFD, DRAMMS leads to a smooth deformation field that better aligns the tumor and other complicated structures.

5. Discussion

In this paper, a general-purpose registration method named DRAMMS has been presented.

DRAMMS characterizes each voxel by an optimized multi-scale and multi-orientation Gabor attribute vector, therefore increasing the distinctiveness of each voxel and reducing matching ambiguity. A second feature is that DRAMMS assigns a continuously-valued weight to each voxel, based on a mutual-saliency measure that evaluates whether this voxel is capable of establishing unique correspondences. With the assistance of this mutual-saliency weighting mechanism, the registration relies more on regions that can be effectively identified and matched between images, and is thereby capable of reducing the negative impact caused by missing data and/or partial loss of correspondences. Experiments on simulated images, across-subject images and multi-modality images from the human brain, heart and prostate have demonstrated the general applicability and registration accuracy of DRAMMS.

This section discusses the various aspects of DRAMMS. Section 5.1 discusses how DRAMMS bridges the gap between voxel-wise methods and landmark/feature-based methods. Sections 5.2, 5.3 and 5.4 discuss, respectively, the attribute extraction, attribute selection and mutual-saliency weighting components of DRAMMS. In the end, we discuss the differences between DRAMMS and a closely-related work, HAMMER (Shen et al., 2003), in Section 5.5 and conclude the paper in Section 6.

5.1. DRAMMS as a bridge between voxel-wise and landmark/feature-based methods

The two components of DRAMMS can be regarded as bridges between the traditional voxel-wise methods and the landmark/feature-based methods in two respects: voxel characterization and voxel utilization. This is summarized in Table 2.

Table 2
How DRAMMS bridges the gap between voxel-wise and landmark/feature-based registration methods.

Voxel characterization:
- Voxel-wise methods: intensity (generally applicable to various tasks, but not distinctive for voxels).
- Landmark/feature-based methods: task-specific attributes (distinctive for voxels, but not generally applicable to various tasks).
- DRAMMS: optimal Gabor attributes (generally applicable to various tasks, and distinctive for voxels).

Spatial adaptivity (voxel utilization):
- Voxel-wise methods: not adaptive (weight = 1 for all voxels).
- Landmark/feature-based methods: binarily adaptive (weight = 1 for landmarks/features and 0 otherwise).
- DRAMMS: continuously adaptive (voxels weighted continuously by the mutual-saliency measure).

5.2. Discussion on attribute extraction

Gabor attributes are used in the DRAMMS framework for the three reasons mentioned in Section 2.2.1: general applicability; suitability for single- and multi-modality registration; and their multi-scale and multi-orientation nature. When using Gabor filter banks, the number of scales, the number of orientations, and the frequency range to be covered should all be carefully tuned. Setting these parameters ineffectively would unnecessarily introduce large redundancy among the extracted attributes, increasing the computational burden and decreasing the distinctiveness of the attribute description. In this paper, we have adopted the parameter settings developed by Manjunath and Ma (1996), which have found successful application in a number of studies (e.g., Zhan and Shen, 2006). In particular, the number of scales is set at M = 4, the number of orientations at N = 6, the highest frequency at 0.4 and the lowest frequency at 0.05, and the size of the Gaussian envelope depends on the highest frequency through a fixed relationship specified in Eq. (1) of their paper.
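For illustration, the sketch below assembles a 2-D bank with M = 4 scales and N = 6 orientations, with center frequencies spaced geometrically between 0.05 and 0.4 (interpreted here as cycles per pixel). It uses a generic real Gabor kernel with a common one-octave bandwidth heuristic for the Gaussian envelope rather than the exact envelope relation of Manjunath and Ma (1996), so the kernel details are assumptions for illustration only.

import numpy as np

def gabor_bank(n_scales=4, n_orients=6, f_lo=0.05, f_hi=0.4, size=31):
    """Bank of real 2-D Gabor kernels: n_scales x n_orients kernels whose
    center frequencies are spaced geometrically between f_lo and f_hi."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(n_scales) / (n_scales - 1))
    bank = []
    for f in freqs:
        sigma = 0.56 / f                      # heuristic: roughly one-octave bandwidth
        for k in range(n_orients):
            theta = k * np.pi / n_orients
            xr = x * np.cos(theta) + y * np.sin(theta)
            kernel = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2)) \
                     * np.cos(2.0 * np.pi * f * xr)
            bank.append(kernel - kernel.mean())   # remove DC so flat regions respond weakly
    return bank   # 4 x 6 = 24 kernels

Each voxel's attribute vector would then consist of its responses to all kernels in the bank (e.g., obtained by convolving the image with each kernel), giving a multi-scale, multi-orientation description of the local anatomy.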
Fig. 18 highlights the differences in the resulting similarity maps between adopting their set of parameters (sub-figures b-e) and adopting a random set of parameters (sub-figures b'-e').

There are, of course, other texture and geometric attributes that could easily fit into the DRAMMS framework. Examples include gray-level co-occurrence matrix (GLCM) attributes (Haralick, 1979), spin-image attributes (Lazebnik et al., 2005), RIFT (rotation-invariant feature transform) attributes (Lazebnik et al., 2005) and fractal attributes (Lopes and Betrouni, 2009). Although Gabor attributes are believed to be a proper choice because of the three merits listed in Section 2.2.1 and because of the experimental comparison in Fig. 8, a rigorous comparative study of all these attributes in the registration context would be of interest. Such a study is a non-trivial task: it requires a large number of images to be registered; it requires objective evaluation of registration accuracy, which is difficult; and, ideally, it requires quantitative analysis of the number and depth of local minima in the cost function resulting from different attributes or their combinations. We mark such a study as one of our future research directions.

5.3. Discussion on attribute selection

Although the overlaps/redundancies within a Gabor filter bank can be largely reduced by carefully choosing a set of parameters at the very beginning, the redundancy will almost never vanish completely, simply because of the non-orthogonal nature of Gabor filters. Therefore, the attribute selection module is needed to further reduce information redundancy and the computational burden. Key to the selection of attributes is the quantification of matching accuracy and matching uniqueness, especially the latter. Quantifying matching uniqueness by the mutual-saliency measure enables us to find voxels and attribute components that not only render true correspondences similar, but, more importantly, render them uniquely similar, meaning similar to each other and not similar to anything else in the neighborhood.

Another nice feature of the attribute selection part is that no a priori knowledge is required. It is designed to take no input other than the two images being registered, therefore keeping DRAMMS a general-purpose method. Nevertheless, if a priori knowledge is available, the DRAMMS framework can readily be extended to incorporate it without much effort. For instance, if a number of anatomically corresponding voxels are given by experts, we can rely on them directly to select the optimal attributes and simply skip the step of selecting the training voxel pairs, as long as the expert-defined voxel pairs are (1) mutually salient, (2) representative of the anatomic/geometric context in the images and (3) ideally, spatially distributed. A priori knowledge can also be obtained if registration is restricted to a specific type of images (e.g., human brain MRI registration). In this case, a common set of optimal attributes can be learned by registering a large set of such images. This should provide a considerable speedup of the algorithm for some common registration tasks.

5.4. Discussion on mutual-saliency weighting

The mutual-saliency value is automatically derived to reflect whether the matching between a pair of voxels is unique within its neighborhood.
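As a rough illustration of this idea (the exact definition used by DRAMMS, with its core and neighborhood regions, is given in the methods section; the ratio below is a simplified 2-D stand-in, and the similarity function and all names are assumptions), a mutual-saliency-like value can be computed as the similarity between a voxel and its tentative match divided by the average similarity between that voxel and the match's surrounding neighbors:

import numpy as np

def similarity(a, b):
    """Illustrative similarity in (0, 1] between two attribute vectors."""
    return 1.0 / (1.0 + np.mean(np.abs(a - b)))

def mutual_saliency(attr_p, subject_attrs, q, radius=3, core=1):
    """Simplified mutual-saliency of matching a template voxel p (with
    attribute vector attr_p) to subject voxel q = (row, col), given the
    subject's attribute image subject_attrs of shape (rows, cols, dim).
    Large values mean p is similar to q but dissimilar to q's neighbors,
    i.e. the tentative match is locally unique."""
    qi, qj = q
    sim_match = similarity(attr_p, subject_attrs[qi, qj])
    neighbor_sims = []
    for di in range(-radius, radius + 1):
        for dj in range(-radius, radius + 1):
            if max(abs(di), abs(dj)) <= core:        # skip the core region around q
                continue
            ni, nj = qi + di, qj + dj
            if 0 <= ni < subject_attrs.shape[0] and 0 <= nj < subject_attrs.shape[1]:
                neighbor_sims.append(similarity(attr_p, subject_attrs[ni, nj]))
    return sim_match / (np.mean(neighbor_sims) + 1e-8)

During registration, such values serve as continuously-valued weights, so that voxels with unique matches drive the alignment while voxels in outlier regions contribute little.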

Fig. 18. Different similarity maps when using different parameter settings of the Gabor filter bank for attribute extraction. All similarity maps are calculated between a cross point and all other points in the same image. (b-e) are similarity maps resulting from the Gabor parameters suggested in Manjunath and Ma (1996), while their counterparts (b'-e') result from a random set of Gabor parameters. This figure highlights the importance of adopting the right set of Gabor parameters.

Since saliency is measured individually within one image, whereas matching uniqueness is measured across images, neither the single-saliency nor the double-saliency used in existing methods necessarily indicates matching uniqueness. This is the motivation for developing mutual-saliency, which directly measures matching uniqueness across images. Consequently, it can more effectively capture regions/voxels that are useful for registration and regions/voxels that have difficulty establishing correspondences.

The mutual-saliency map seems to be most helpful in the presence of outlier regions, also known as missing data, or partial loss of correspondences. A typical example is the cuts/stitches that occur when registering histological images with MR images (as in Figs. 11 and 17). Another typical example is the lesions/tumors encountered when registering diseased images with normal images (e.g., Fig. 1). In these cases, existing methods that do not use mutual-saliency weighting tend to over-aggressively match structures that should not correspond to each other. Although such over-aggressive matching usually leads to a smaller residual (as in Fig. 11c), it is not desirable because of the artificial results and the often convoluted deformations. In contrast, the mutual-saliency map guides the registration to match well the structures that should be matched, and to leave the structures that have difficulty finding correspondences almost untouched. This leads to more anatomically meaningful results, as in Fig. 11f. The relatively larger intensity residual then becomes an appropriate descriptor encoding the missing data.

It should be noted that the mutual-saliency calculation itself is usually computationally costly, because it checks voxel-wise similarities in a neighborhood for each individual voxel and, on top of that, because all these similarities are calculated from high-dimensional attribute vectors. This computational burden can be observed in Table 1, where incorporating mutual-saliency into the registration usually costs considerably more computational time. Therefore, for registration tasks that do not involve missing data or loss of correspondences, the mutual-saliency component can probably be dropped, with a noticeable speedup and a moderate decrease in registration accuracy.

5.5. Differences between DRAMMS and HAMMER algorithms

We regard the HAMMER registration algorithm (Shen et al., 2003) as the one closest to DRAMMS. HAMMER pioneered the attribute-matching concept, extracting geometric-moment-invariant (GMI) attributes, tissue membership attributes and boundary/edge attributes for brain image registration. We point out the following differences between the two methods:

1. Different applications: HAMMER is specifically designed for brain image registration, because it is based on attributes derived from brain segmentation and tissue labeling; whereas DRAMMS is generally applicable to different modalities and contents.
2. Different voxel utilization: HAMMER is essentially a landmark/feature-based method, which may inherit the challenges inherent to landmark/feature-based methods; whereas DRAMMS is a voxel-wise method that utilizes all of the imaging data and does not need heuristic thresholding of voxels.

3. Different ability to handle outlier regions: compared with HAMMER, DRAMMS is able to handle outlier regions such as brain lesions and histology cuts because of the mutual-saliency mechanism.

4. Different deformation mechanism: HAMMER tends to aggressively match brain structures, which can lead to non-diffeomorphic mappings; whereas DRAMMS enforces diffeomorphism in the transformation.

In this paper, DRAMMS is only compared with other general-purpose methods; it will be interesting to compare it with specific-purpose methods, including HAMMER, in future studies, to better establish registration accuracies.

6. Conclusion

In summary, a general-purpose image registration method named DRAMMS has been presented in this paper. In DRAMMS, more distinctive characterization of voxels leads to reduced matching ambiguities, and more spatially adaptive utilization of the imaging data leads to robustness against missing data or partial loss of correspondences. Future work includes comparing different texture/geometric attributes within the framework, better understanding the effect of mutual-saliency in registration scenarios with outlier regions, and more extensively comparing our method with existing ones in various registration tasks.
