Schroedinger Eigenmaps with Nondiagonal Potentials for Spatial-Spectral Clustering of Hyperspectral Imagery


Nathan D. Cahill (a), Wojciech Czaja (b), and David W. Messinger (c)

(a) Center for Applied and Computational Mathematics, School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY 14623, USA
(b) Department of Mathematics, University of Maryland, College Park, MD 20742, USA
(c) Digital Imaging and Remote Sensing Laboratory, Center for Imaging Science, Rochester Institute of Technology, Rochester, NY 14623, USA

Send correspondence to Nathan D. Cahill: nathan.cahill@rit.edu

ABSTRACT

Schroedinger Eigenmaps (SE) has recently emerged as a powerful graph-based technique for semi-supervised manifold learning and recovery. By extending the Laplacian of a graph constructed from hyperspectral imagery to incorporate barrier or cluster potentials, SE enables machine learning techniques that employ expert/labeled information provided at a subset of pixels. In this paper, we show how different types of nondiagonal potentials can be used within the SE framework in a way that allows for the integration of spatial and spectral information in unsupervised manifold learning and recovery. The nondiagonal potentials encode spatial proximity, which, when combined with the spectral proximity information in the original graph, yields a framework that is competitive with state-of-the-art spectral/spatial fusion approaches for clustering and subsequent classification of hyperspectral image data.

Keywords: Schroedinger eigenmaps, Laplacian eigenmaps, spatial-spectral fusion, dimensionality reduction

1. INTRODUCTION

In hyperspectral imagery, each image pixel typically comprises hundreds of spectral bands.1 Hence, an m x n hyperspectral image with d spectral bands can be thought of as a data set containing mn points in a d-dimensional space. Because d can be quite large, it can be difficult for analysts to effectively search the imagery to identify targets or anomalies. Furthermore, automated algorithms for classification, segmentation, and target/anomaly detection can require a massive amount of computation. To combat these issues, a variety of approaches have recently been proposed to perform dimensionality reduction on hyperspectral imagery. Since hyperspectral data cannot be assumed to lie on a linear manifold,2 many nonlinear approaches to dimensionality reduction have been investigated, including Local Linear Embedding (LLE),3 Isometric Feature Mapping (ISOMAP),4 Kernel Principal Components Analysis (KPCA),5 and Laplacian Eigenmaps (LE).6

In this article, we focus on the LE algorithm, which involves constructing a graph representing the high-dimensional data and then using generalized eigenvectors of the graph Laplacian matrix as the basis for a lower-dimensional space in which local properties of the data are preserved. Recent research7-9 has shown that, due to spatial correlations in hyperspectral imagery (especially in high resolution hyperspectral imagery), spatial information should be included, or fused, with the spectral information in order to more adequately represent the properties of the image data in the lower-dimensional space. Incorporating spatial information has been approached from multiple fronts: modifying the structure of the graph,7,8 modifying the edge weights,9 or fusing spatial and spectral Laplacian matrices and/or their generalized eigenvectors.8

We propose a different generalization of the LE algorithm for dimensionality reduction of hyperspectral imagery in a manner that fuses spatial and spectral information. Our generalization, which we refer to as the Spatial-Spectral Schroedinger Eigenmaps (SSSE) algorithm, is based on adding nondiagonal potentials encoding spatial proximity to the Laplacian matrix of the original graph (which contains spectral proximity information).

Adding these potentials changes the Laplacian operator to a Schroedinger operator, making our proposed algorithm an instance of the Schroedinger Eigenmaps (SE) algorithm.10 (Originally, SE was proposed for semi-supervised dimensionality reduction and learning; in SSSE, the semi-supervision refers to knowledge of spatial proximity between pixels instead of knowledge of particular class labels.) To illustrate the practicality of the SSSE algorithm, we performed experiments on publicly available hyperspectral images (Pavia University and Indian Pines). We used a subset of the ground-truth labels from these images to learn classifiers for predicting class labels from the SSSE reduced-dimension data. When comparing SSSE with eight other dimensionality reduction algorithms, the subsequent classification performance is competitive or superior in nearly all cases.

The remainder of this article is organized as follows. Section 2 provides mathematical preliminaries that describe the LE and SE algorithms, as well as prior-art approaches for spatial-spectral fusion in LE-based dimensionality reduction. Section 3 presents the proposed SSSE algorithm. Section 4 describes, carries out, and analyzes the results of classification experiments that illustrate the efficacy of the SSSE algorithm with respect to several prior-art algorithms. Finally, Section 5 provides some concluding remarks.

2. MATHEMATICAL PRELIMINARIES

In many areas of imaging analysis and computer vision, high-dimensional data intrinsically resides on a low-dimensional manifold in the high-dimensional space. The goal of dimensionality reduction algorithms is to reduce the number of dimensions in the data in a way that preserves properties of the low-dimensional manifold. Mathematically, if $X = \{x_1, \ldots, x_k\}$ is a set of points on a manifold $\mathcal{M} \subset \mathbb{R}^n$, dimensionality reduction algorithms aim to identify a set of corresponding points $Y = \{y_1, \ldots, y_k\}$ in $\mathbb{R}^m$, where $m \ll n$, so that the structure of $Y$ is somehow similar to that of $X$.

2.1 Laplacian Eigenmaps

The Laplacian Eigenmaps (LE) algorithm of Belkin and Niyogi11 is a geometrically motivated nonlinear dimensionality reduction algorithm that is popular due to its computational efficiency, its locality-preserving properties, and its natural relationship to clustering algorithms. It involves the following three steps:

1. Construct an undirected graph $G = (X, E)$ whose vertices are the points in $X$ and whose edges $E$ are defined based on proximity between vertices. Proximity can be found either by $\epsilon$-neighborhoods or by (mutual) k-nearest neighbor search.

2. Define weights for the edges in $E$. One common method is to define weights according to the heat kernel; i.e., define the weight $W_{i,j} = \exp\left(-\|x_i - x_j\|^2 / \sigma\right)$ if an edge exists between $x_i$ and $x_j$, or $W_{i,j} = 0$ otherwise.

3. Compute the smallest $m+1$ eigenvalues and eigenvectors of the generalized eigenvector problem $Lf = \lambda D f$, where $D$ is the diagonal weighted degree matrix defined by $D_{i,i} = \sum_j W_{i,j}$, and $L = D - W$ is the Laplacian matrix. If the resulting eigenvectors $f_0, \ldots, f_m$ are ordered so that $0 = \lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_m$, then the points $y_1^T, y_2^T, \ldots, y_k^T$ are defined to be the rows of $F = [f_1 \ f_2 \ \cdots \ f_m]$.

As noted by Belkin and Niyogi,11 the generalized eigenvector problem solved in the LE algorithm is identical to the one that emerges in the normalized cuts (NCut) algorithm12,13 for clustering vertices of a graph into different classes.
In fact, clustering can proceed directly on the points in Y, using a standard algorithm such as k-means clustering.
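To make the three steps above concrete, here is a minimal Python/NumPy sketch of the LE computation (an illustrative sketch, not the authors' implementation). The symmetrized, rather than strictly mutual, k-nearest-neighbor graph, the parameter names knn and sigma, and the eigensolver settings are assumptions made for this example.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, m=2, knn=20, sigma=1.0):
    """Embed the rows of X (k points in R^n) into R^m via Laplacian Eigenmaps."""
    # Step 1: build a k-nearest-neighbor graph and symmetrize it so G is undirected.
    A = kneighbors_graph(X, knn, mode='distance', include_self=False)
    A = A.maximum(A.T)
    # Step 2: heat-kernel weights W_ij = exp(-||x_i - x_j||^2 / sigma) on existing edges.
    W = A.copy()
    W.data = np.exp(-W.data**2 / sigma)
    # Step 3: smallest m+1 generalized eigenpairs of L f = lambda D f, with L = D - W.
    d = np.asarray(W.sum(axis=1)).ravel()
    D = diags(d)
    L = D - W
    # Shift-invert about a point just below zero to retrieve the smallest eigenvalues.
    vals, vecs = eigsh(L, k=m + 1, M=D, sigma=-1e-6, which='LM')
    order = np.argsort(vals)
    return vecs[:, order[1:m + 1]]  # drop the constant eigenvector f_0; rows are the y_i
```

The rows of the returned array can then be fed directly to a clustering routine such as k-means, as described above.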

2.2 Schroedinger Eigenmaps

Czaja and Ehler10 proposed the Schroedinger Eigenmaps (SE) algorithm by generalizing the LE algorithm to incorporate a potential matrix $V$. The SE algorithm proceeds with the same steps as the LE algorithm, with the exception that the generalized eigenvector problem in step (3) is replaced by the problem $(L + \alpha V)f = \lambda D f$, where $\alpha$ is a parameter chosen to relatively weight the contributions of the Laplacian matrix and the potential matrix. Two types of potentials have been explored for use in hyperspectral imaging analysis:14 barriers and clusters. Barrier potentials are created by defining $V$ to be a nonnegative diagonal matrix. The positive entries in $V$ effectively pull the corresponding points in $Y$ towards the origin. Cluster potentials are created by defining $V$ to be the sum of nondiagonal matrices $V^{(i,j)}$ defined by:

$V^{(i,j)}_{k,l} = \begin{cases} 1, & (k,l) \in \{(i,i),(j,j)\} \\ -1, & (k,l) \in \{(i,j),(j,i)\} \\ 0, & \text{otherwise}. \end{cases}$   (1)

The inclusion of $V^{(i,j)}$ in $V$ effectively pulls, or clusters, $y_i$ and $y_j$ together.

A key benefit of SE is that the potential matrix $V$ enables semi-supervised clustering. If a subset of points in $X$ has a known label, defining $V$ to be a cluster potential will pull the corresponding points in $Y$ towards each other. This same behavior extends to multiple labels. Following dimensionality reduction via SE, a standard clustering algorithm (like k-means clustering) can be employed as in the previous section.

2.3 Spatial-Spectral Fusion

When the manifold under investigation describes image data, it is not only the spectral (intensity) information at each pixel in the image that influences the structure of the manifold, but also the spatial relationships between the spectra of neighboring pixels. To handle both spectral and spatial information mathematically, a manifold point $x_i$ is represented by concatenating a pixel's spectral information $x_i^f$ and its spatial location $x_i^p$; i.e., $x_i^T = \left[ (x_i^f)^T \ (x_i^p)^T \right]$. There are multiple ways of proceeding with LE-based dimensionality reduction (and clustering) that have been explored in the literature.

2.3.1 Shi-Malik

Shi and Malik12,13 describe how to handle graph construction and edge weight definition in a manner that incorporates both spectral and spatial information. This technique, applied in an LE-based dimensionality reduction algorithm, can be described by the following steps:

1. Construct $G$ so that the set of edges $E$ is defined based on $\epsilon$-neighborhoods of the spatial locations; i.e., define an edge between $x_i$ and $x_j$ if $\|x_i^p - x_j^p\|_2 < \epsilon$.

2. Define edge weights by:

$W_{i,j} = \begin{cases} \exp\left(-\dfrac{\|x_i^f - x_j^f\|^2}{\sigma_f^2} - \dfrac{\|x_i^p - x_j^p\|^2}{\sigma_p^2}\right), & (x_i, x_j) \in E \\ 0, & \text{otherwise}. \end{cases}$   (2)

3. Proceed with step (3) of the LE algorithm defined in Section 2.1.

2.3.2 Gillis-Bowles

Gillis and Bowles9 modify the approach of Shi and Malik to incorporate a penalty on differences in the direction of the spectral information as opposed to a penalty on the norm of their differences, and they illustrate how this modification is useful in segmenting hyperspectral images. The difference between the Gillis-Bowles and Shi-Malik approaches is that the edge weights in (2) are replaced by:

$W_{i,j} = \begin{cases} \exp\left(-\cos^{-1}\left(\dfrac{\langle x_i^f, x_j^f \rangle}{\|x_i^f\|\,\|x_j^f\|}\right) - \dfrac{\|x_i^p - x_j^p\|^2}{\sigma_p^2}\right), & (x_i, x_j) \in E \\ 0, & \text{otherwise}. \end{cases}$   (3)
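To illustrate how a cluster potential of the form (1) can be assembled and used, the following minimal Python/SciPy sketch (an illustration under assumed inputs, not the authors' code) sums the matrices $V^{(i,j)}$ over a supplied list of index pairs and then solves the SE generalized eigenproblem $(L + \alpha V)f = \lambda D f$. The pair list, the optional per-pair weights, and the solver settings are assumptions of this example.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def cluster_potential(pairs, k, weights=None):
    """Sum of the nondiagonal matrices V^(i,j) of Eq. (1) over the given (i, j) pairs."""
    V = lil_matrix((k, k))
    weights = np.ones(len(pairs)) if weights is None else weights
    for (i, j), w in zip(pairs, weights):
        V[i, i] += w   # +1 entries at (i,i) and (j,j)
        V[j, j] += w
        V[i, j] -= w   # -1 entries at (i,j) and (j,i)
        V[j, i] -= w
    return V.tocsr()

def schroedinger_eigenmaps(L, D, V, alpha, m):
    """Replace the LE problem L f = lambda D f with (L + alpha V) f = lambda D f."""
    vals, vecs = eigsh(L + alpha * V, k=m + 1, M=D, sigma=-1e-6, which='LM')
    order = np.argsort(vals)
    return vecs[:, order[1:m + 1]]
```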

2.3.3 Hou-Zhang-Ye-Zheng

Hou et al.7 propose a slightly different approach to fusing spectral and spatial information in an LE-based algorithm within a system for classifying regions of hyperspectral imagery. Instead of the Shi-Malik and Gillis-Bowles approach of defining graph edges based solely on spatial information and weights based on fused spectral-spatial information, Hou et al. use the fused spectral-spatial information in the step of defining the graph edges and then use binary weights; i.e.,

1. Construct $G$ so that the set of edges $E$ is defined based on k-nearest neighbors according to a fused spectral-spatial metric; i.e., define an edge between $x_i$ and $x_j$ if $x_i$ and $x_j$ are mutually in the k-nearest neighbors of each other according to the measure:

$d(x_i, x_j) = \left(1 - \exp\left(-\dfrac{\|x_i^f - x_j^f\|^2}{2\sigma_f^2}\right)\right)\left(1 - \exp\left(-\dfrac{\|x_i^p - x_j^p\|^2}{2\sigma_p^2}\right)\right).$   (4)

2. Define binary edge weights:

$W_{i,j} = \begin{cases} 1, & (x_i, x_j) \in E \\ 0, & \text{otherwise}. \end{cases}$   (5)

3. Proceed with step (3) of the LE algorithm defined in Section 2.1.

2.3.4 Benedetto et al.

Benedetto et al.8 propose a variety of ways to fuse spectral and spatial information into an LE-based algorithm that is used in conjunction with linear discriminant analysis (LDA) to classify hyperspectral imagery. To unify their various proposed techniques, we introduce the metric:

$d_\beta(x_i, x_j) = \left(\beta\,\dfrac{\|x_i^f - x_j^f\|^2}{\sigma_f^2} + (1-\beta)\,\dfrac{\|x_i^p - x_j^p\|^2}{\sigma_p^2}\right)^{1/2},$   (6)

where $0 \leq \beta \leq 1$. Note that $d_0$ measures scaled Euclidean distance based purely on spatial components, and $d_1$ measures scaled Euclidean distance based purely on spectral components. Furthermore, we define $G_\beta$ to be the graph constructed so that the set of edges $E_\beta$ is defined based on mutual k-nearest neighbors according to the metric $d_\beta(x_i, x_j)$. We also define the weight matrix $W_\beta$ componentwise by:

$W^{(\beta)}_{i,j} = \begin{cases} \exp\left(-d_\beta(x_i, x_j)^2\right), & (x_i, x_j) \in E_\beta \\ 0, & \text{otherwise}, \end{cases}$   (7)

and we define the corresponding Laplacian matrix $L_\beta = D_\beta - W_\beta$.

With this notation, we can describe the following three flavors of LE-based manifold recovery proposed by Benedetto et al.8

Benedetto-E: Fused Eigenvectors. Perform the following steps:

1. Construct graphs $G_0$ and $G_1$ so that the sets of edges $E_0$ and $E_1$ are defined based on mutual k-nearest neighbors according to the metrics $d_0$ and $d_1$, respectively.

2. Define edge weights for $G_0$ and $G_1$ according to (7) with $\beta = 0$ and $1$, respectively.

3. Let $m = m_0 + m_1$. Compute the smallest $m_0$ eigenvalues and eigenvectors of $L_0 f^{(0)} = \lambda D_0 f^{(0)}$, and compute the smallest $m_1$ eigenvalues and eigenvectors of $L_1 f^{(1)} = \lambda D_1 f^{(1)}$. Assuming each set of eigenvectors is sorted so that the eigenvalues are increasing, the points $y_1^T, y_2^T, \ldots, y_k^T$ are defined to be the rows of $F = \left[f_1^{(0)} \cdots f_{m_0}^{(0)} \ f_1^{(1)} \cdots f_{m_1}^{(1)}\right]$.
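As a concrete reference for the two fused measures above, the following small Python sketch (illustrative only; the argument names for the spectral and spatial components are assumptions) evaluates the Hou et al. measure of Eq. (4) and the metric $d_\beta$ of Eq. (6), together with the corresponding weight of Eq. (7), for a single pair of points.

```python
import numpy as np

def d_hzyz(xf_i, xf_j, xp_i, xp_j, sigma_f=1.0, sigma_p=1.0):
    """Fused spectral-spatial measure of Eq. (4), used to define mutual k-NN edges."""
    spec = 1.0 - np.exp(-np.sum((xf_i - xf_j)**2) / (2.0 * sigma_f**2))
    spat = 1.0 - np.exp(-np.sum((xp_i - xp_j)**2) / (2.0 * sigma_p**2))
    return spec * spat

def d_beta(xf_i, xf_j, xp_i, xp_j, beta, sigma_f=1.0, sigma_p=1.0):
    """Fused metric of Eq. (6): beta = 1 is purely spectral, beta = 0 purely spatial."""
    spec = np.sum((xf_i - xf_j)**2) / sigma_f**2
    spat = np.sum((xp_i - xp_j)**2) / sigma_p**2
    return np.sqrt(beta * spec + (1.0 - beta) * spat)

def w_beta(xf_i, xf_j, xp_i, xp_j, beta, **kwargs):
    """Heat-kernel weight of Eq. (7) on an edge of G_beta (zero for non-edges)."""
    return np.exp(-d_beta(xf_i, xf_j, xp_i, xp_j, beta, **kwargs)**2)
```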

Benedetto-L: Fused Laplacians. Perform steps (1) and (2) of Benedetto-E. Now perform the steps:

3. Define a fused Laplacian matrix $L$ using one of three methods: (a) element-wise multiplication of $L_0$ and $L_1$, (b) sum of $L_0$ and $L_1$, or (c) matrix multiplication of $L_1$ by $L_0$, followed by zeroing any components corresponding to edges not in $E_1$. (In fusion methods (a) and (c), the diagonals of the resulting matrices should be recomputed in order to ensure that they are valid Laplacian matrices; i.e., that the row sums are all zero.)

4. Proceed with step (3) of the LE algorithm defined in Section 2.1, using the fused Laplacian matrix $L$.

Benedetto-M: Fused Metric. Perform the standard LE algorithm using the graph $G_\beta$ with corresponding weight matrix $W_\beta$.

3. SPATIAL-SPECTRAL SCHROEDINGER EIGENMAPS FOR DIMENSIONALITY REDUCTION AND CLUSTERING

All of the prior-art approaches described in Section 2.3 for performing dimensionality reduction and clustering with fused spatial and spectral information are based on the LE algorithm. We propose a different approach for spatial-spectral dimensionality reduction and clustering: computing Schroedinger Eigenmaps on graphs defined with spectral information, using cluster potentials that encode spatial proximity. The proposed algorithm, which we denote SSSE (Spatial-Spectral Schroedinger Eigenmaps), proceeds as follows:

1. Construct an undirected graph $G = (X, E)$ whose vertices are the points in $X$ and whose edges $E$ are defined based on proximity between the spectral components of the vertices.

2. Define weights for the edges in $E$ based on spectral information. For example, define the weight $W_{i,j} = \exp\left(-\|x_i^f - x_j^f\|^2 / \sigma_f^2\right)$ if an edge exists between $x_i$ and $x_j$, or $W_{i,j} = 0$ otherwise.

3. Define a cluster potential matrix $V$ that encodes proximity between the spatial components of the vertices:

$V = \displaystyle\sum_{i=1}^{k} \sum_{x_j \in N_\epsilon^p(x_i)} \gamma_{i,j}\, \exp\left(-\dfrac{\|x_i^p - x_j^p\|^2}{\sigma_p^2}\right) V^{(i,j)},$   (8)

where $N_\epsilon^p(x_i)$ is the set of points in $X$ whose spatial components are in an $\epsilon$-neighborhood of the spatial components of $x_i$; i.e.,

$N_\epsilon^p(x_i) = \{x \in X \setminus \{x_i\} \ \text{s.t.} \ \|x_i^p - x^p\| \leq \epsilon\},$   (9)

$V^{(i,j)}$ is defined as in (1), and $\gamma_{i,j}$ can be chosen in a manner that provides greater influence for spatial neighbors having nearby spectral components.

4. Compute the smallest $m+1$ eigenvalues and eigenvectors of $(L + \alpha V)f = \lambda D f$, where $D$ is the diagonal weighted degree matrix defined by $D_{i,i} = \sum_j W_{i,j}$, and $L = D - W$ is the Laplacian matrix. If the resulting eigenvectors $f_0, \ldots, f_m$ are ordered so that $0 = \lambda_0 \leq \lambda_1 \leq \cdots \leq \lambda_m$, then the points $y_1^T, y_2^T, \ldots, y_k^T$ are defined to be the rows of $F = [f_1 \ f_2 \ \cdots \ f_m]$.

Following dimensionality reduction, a standard clustering algorithm (like k-means clustering) can be employed as in Sections 2.1 and 2.2.

Note the similarities between the SSSE algorithm and the Shi-Malik and Gillis-Bowles approaches described in Sections 2.3.1 and 2.3.2. If we choose $\gamma_{i,j} = \exp\left(-\|x_i^f - x_j^f\|^2 / \sigma_f^2\right)$ or $\gamma_{i,j} = \exp\left(-\cos^{-1}\left(\langle x_i^f, x_j^f \rangle / (\|x_i^f\|\,\|x_j^f\|)\right)\right)$, then the coefficients of each $V^{(i,j)}$ in (8) are equivalent to the edge weights in (2) or (3), respectively.
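The following Python/SciPy sketch strings the four SSSE steps together (an illustrative outline, not the authors' released MATLAB implementation; the symmetrized rather than strictly mutual k-NN graph, the first of the two $\gamma_{i,j}$ choices above, and the parameter names are assumptions of the example).

```python
import numpy as np
from scipy.sparse import diags, lil_matrix
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph, radius_neighbors_graph

def ssse(xf, xp, m=50, knn=20, sigma_f=1.0, sigma_p=1.0, eps=1.0, alpha=1.0):
    """xf: (k, d) spectral components; xp: (k, 2) pixel coordinates."""
    k = xf.shape[0]
    # Steps 1-2: spectral-only graph with heat-kernel weights.
    A = kneighbors_graph(xf, knn, mode='distance', include_self=False)
    A = A.maximum(A.T)
    W = A.copy()
    W.data = np.exp(-W.data**2 / sigma_f**2)
    d = np.asarray(W.sum(axis=1)).ravel()
    D = diags(d)
    L = D - W
    # Step 3: cluster potential of Eq. (8) over spatial eps-neighborhoods,
    # here with gamma_ij = exp(-||xf_i - xf_j||^2 / sigma_f^2).
    N = radius_neighbors_graph(xp, eps, mode='connectivity', include_self=False)
    V = lil_matrix((k, k))
    for i, j in zip(*N.nonzero()):
        gamma = np.exp(-np.sum((xf[i] - xf[j])**2) / sigma_f**2)
        w = gamma * np.exp(-np.sum((xp[i] - xp[j])**2) / sigma_p**2)
        V[i, i] += w
        V[j, j] += w
        V[i, j] -= w
        V[j, i] -= w
    # Step 4: smallest m+1 eigenpairs of (L + alpha V) f = lambda D f.
    vals, vecs = eigsh(L + alpha * V.tocsr(), k=m + 1, M=D, sigma=-1e-6, which='LM')
    order = np.argsort(vals)
    return vecs[:, order[1:m + 1]]
```

With eps set to one pixel, each spatial neighborhood contains only a pixel's immediately adjacent pixels, which is the small-neighborhood regime discussed in the next paragraph.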

The benefit of SSSE is that since these coefficients are applied to the cluster potentials (and not applied as edge weights on the graph $G$), the spatial neighborhood $N_\epsilon^p$ can be chosen to be quite small (even $\epsilon$ = one pixel) while still allowing $G$ to contain edges corresponding to spectrally similar points that may be spatially distant. Another advantage of SSSE over some of the other algorithms (specifically, the Hou-Zhang-Ye-Zheng and Benedetto-M algorithms) is that the impact of changing the relative magnitudes of the spatial and spectral scale parameters ($\sigma_f$ and $\sigma_p$) can be explored without having to repeat the graph construction step. Once a graph is constructed, any changes made with respect to $\sigma_p/\sigma_f$ can be achieved solely by modifying the cluster potential matrix.

4. CLASSIFICATION EXPERIMENTS

In order to determine the efficacy of the proposed algorithm for spatial-spectral dimensionality reduction and to compare its performance with that of the prior-art algorithms described in Section 2.3, we perform classification experiments (after dimensionality reduction) using publicly available hyperspectral image data sets with manually labeled ground truth. The data sets, experiments, and results are described in this section.

4.1 Data

We use two publicly available datasets: Indian Pines and Pavia. The Indian Pines image, shown in Figs. 1a-1b, was captured by an AVIRIS spectrometer over the rural Indian Pines test site in Northwestern Indiana, USA. The image has a spatial resolution of approximately 20 meters per pixel and 224 spectral bands, 4 of which we have discarded due to noise and water absorption. The image has been partially labeled, yielding ground-truth pixels associated with 16 classes. The Pavia image, a portion of which is shown in Figs. 1c-1d, was captured by a ROSIS sensor over the University of Pavia, Italy. The original image has a spatial resolution of approximately 1.3 meters per pixel and 115 spectral bands. A partial set of labels yields ground-truth pixels associated with 9 classes. We use a cropped subset of the original image in which the ground-truth labels are particularly spatially diverse.

Figure 1: Original images and ground truth: (a) Indian Pines, bands [29, 15, 12], (b) Indian Pines ground truth, (c) Pavia, bands [68, 30, 2], (d) Pavia ground truth.

4.2 Experimental Setup

To compare our proposed SSSE algorithm with prior-art algorithms, we use each algorithm to perform dimensionality reduction, and then we subsequently perform classification using the lower-dimensional embeddings in a similar manner to the protocol described in Benedetto et al.8 The classification step is performed using linear discriminant analysis (LDA) as implemented in MATLAB, with 10% and 1% of each class selected from the ground-truth pixels of the Indian Pines and Pavia images, respectively. We repeated classification 10 times and computed the mode of the results at each pixel to yield the final classification result. We used the resulting confusion matrices to compute per-class accuracy as well as overall accuracy (OA), average accuracy (AA), average precision (AP), average sensitivity (ASe), average specificity (ASp), and Kappa coefficient ($\kappa$). Finally, we compared algorithms by determining whether differences in their Kappa coefficients were statistically significant using Z scores.15

For the dimensionality reduction step, we use two versions of our proposed SSSE algorithm: SSSE1 - SSSE with $\gamma_{i,j} = \exp\left(-\|x_i^f - x_j^f\|^2 / \sigma_f^2\right)$, and SSSE2 - SSSE with $\gamma_{i,j} = \exp\left(-\cos^{-1}\left(\langle x_i^f, x_j^f \rangle / (\|x_i^f\|\,\|x_j^f\|)\right)\right)$. We also use our own implementations of the following algorithms: SM (Shi-Malik), GB (Gillis-Bowles), HZYZ (Hou-Zhang-Ye-Zheng), BE (Benedetto-E), BL1 (Benedetto-L with element-wise multiplication of Laplacians), BL2 (Benedetto-L with addition of Laplacians), BL3 (Benedetto-L with matrix multiplication of Laplacians followed by zeroing of edges not in $E_1$), and BM (Benedetto-M).

A few notes about data treatment and parameter choices: Prior to dimensionality reduction, the spectral components of the data in $X$ are normalized so that $(1/k)\sum_{i=1}^{k} \|x_i^f\|_2 = 1$. We also assume that the components of $x^p$ are in units of pixels. We make the initial choice of $\sigma_f = \sigma_p = 1$ for each algorithm, but we adjust these parameters when necessary to improve performance. For all algorithms, we choose the reduced dimension to be n = 50 for Indian Pines and n = 25 for Pavia. For algorithms requiring graph construction via k-nearest neighbors (SSSE, HZYZ, BE, BL1, BL2, BL3, BM), we select k = 20. For the SSSE algorithm, we choose $\epsilon$ = 1 pixel for defining the neighborhood $N_\epsilon^p(x_i)$. In addition, we introduce a parameter $\hat\alpha$ defined by $\alpha = \hat\alpha\,\mathrm{tr}(L)/\mathrm{tr}(V)$, in order to trade off the impact of $L$ and $V$ in a way that can be directly compared across images.

4.3 Results

In the SSSE algorithm, fixing $\sigma_f = \sigma_p = 1$ leaves $\hat\alpha$ as the only free parameter. We tested classification after dimensionality reduction via SSSE1 and SSSE2 by selecting 17 logarithmically spaced values for $\hat\alpha$ ranging from 1 to 100. The resulting overall accuracy and average accuracy, precision, sensitivity, and specificity are shown as functions of $\hat\alpha$ in Fig. 2. Figures 3-4 show resulting classification maps for a subset of these choices of $\hat\alpha$, as well as for the choice $\hat\alpha = 0$ (corresponding to the use of solely spectral information). For both sets of images, we selected the best value of $\hat\alpha$ to be the value that appears to best maximize all of the reported quantities (OA, AA, AP, ASe, ASp). For the Indian Pines image (for both SSSE1 and SSSE2), this value is $\hat\alpha = 17.78$, whereas for the Pavia image (again for both SSSE1 and SSSE2), it is $\hat\alpha =$ . Numerical values of OA, AA, AP, ASe, ASp, and $\kappa$, as well as classification accuracy for each class, are reported in Tables 1-3.
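For reference, the following short Python sketch (illustrative only; the variable names are assumptions) spells out the two $\gamma_{i,j}$ choices that define SSSE1 and SSSE2, the normalization $\alpha = \hat\alpha\,\mathrm{tr}(L)/\mathrm{tr}(V)$ described above, and the grid of 17 logarithmically spaced $\hat\alpha$ values used in the experiments.

```python
import numpy as np

def gamma_ssse1(xf_i, xf_j, sigma_f=1.0):
    # SSSE1: heat kernel on the spectral distance, as in the Shi-Malik weights of Eq. (2).
    return np.exp(-np.sum((xf_i - xf_j)**2) / sigma_f**2)

def gamma_ssse2(xf_i, xf_j):
    # SSSE2: spectral-angle penalty, as in the Gillis-Bowles weights of Eq. (3).
    c = np.dot(xf_i, xf_j) / (np.linalg.norm(xf_i) * np.linalg.norm(xf_j))
    return np.exp(-np.arccos(np.clip(c, -1.0, 1.0)))

def alpha_from_alpha_hat(alpha_hat, L, V):
    # alpha = alpha_hat * tr(L) / tr(V), so alpha_hat is comparable across images.
    return alpha_hat * L.diagonal().sum() / V.diagonal().sum()

alpha_hat_grid = np.logspace(0.0, 2.0, 17)  # 17 log-spaced values from 1 to 100
```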
Also reported in Tables 1-4 are the results of classification after using (our implementations of) the prior-art algorithms for dimensionality reduction and determining the best choice of parameters for those algorithms. For the Indian Pines image, these best parameter choices are: SM: $\epsilon = 5$, $\sigma_f = 0.1$, $\sigma_p = 100$; GB: $\epsilon = 5$, $\sigma_f = 0.2$, $\sigma_p = 100$; HZYZ: $\sigma_f = 1$, $\sigma_p = 10$; BE: 8 spatial / 42 spectral eigenvectors, $\sigma_f = 1$, $\sigma_p = 10$; BM: $\sigma_f = 1$, $\sigma_p = 10$, $\beta =$ . For the Pavia image, the best parameter choices are: SM: $\epsilon = 7$, $\sigma_f = 0.45$, $\sigma_p = 100$; GB: $\epsilon = 7$, $\sigma_f = 0.2$, $\sigma_p = 100$; HZYZ: $\sigma_f = 1$, $\sigma_p = 10$; BE: 5 spatial / 20 spectral eigenvectors, $\sigma_f = 1$, $\sigma_p = 10$; BM: $\sigma_f = 1$, $\sigma_p = 10$, $\beta =$ .

Figure 2: Classification performance measures for SSSE1 (top) and SSSE2 (bottom) as functions of $\hat\alpha$, for the Indian Pines and Pavia images: overall accuracy (blue circles), average accuracy (green x's), average precision (red squares), average sensitivity (black +'s), average specificity (magenta triangles), and Kappa coefficient (yellow triangles). Dashed vertical lines indicate the best choice of $\hat\alpha$.

Figure 3: Classification results for the Indian Pines image after dimensionality reduction via SSSE1 (top row) and SSSE2 (bottom row) for various values of $\hat\alpha$.

Figure 4: Classification results for the Pavia image after dimensionality reduction via SSSE1 (top row) and SSSE2 (bottom row) for various values of $\hat\alpha$.

Note that we did not include results corresponding to BL1; performing element-wise multiplication of weights caused some rows of the resulting weight matrix to be numerically zero, leading to a graph that was not connected, so that the eigenvalue zero had multiplicity greater than one.

As can be seen in Table 1, for the Indian Pines image, the SSSE2 algorithm exhibits the best performance in terms of all of the global measures (OA, AA, AP, ASe, and ASp), and the SSSE1 algorithm exhibits the second-best performance. Other algorithms that perform fairly well on the Indian Pines image include SM, GB, HZYZ, and BE. To determine whether the differences in classification results from different algorithms may be statistically significant, we compute the standard normal deviate, Z, from the Kappa coefficients and their variance estimates; Z scores above 1.96 indicate statistically significant differences in Kappa coefficient at the 95% confidence level. Table 2 shows when the resulting Z scores indicate statistically significant differences between classification performance for each pair of algorithms. From this table, we see that for the Indian Pines data, while the difference in performance between SSSE1 and SSSE2 is not statistically significant, both SSSE1 and SSSE2 do exhibit statistically significant improvements over all other algorithms (with the exception of SSSE1 versus HZYZ, for which the difference in performance is not statistically significant).

In Table 3, we see that for the Pavia image, the BE algorithm exhibits the best performance in terms of all of the global measures. However, SSSE2 and SSSE1 come in second and third place, respectively, in terms of most of the global measures (HZYZ outperforms SSSE1 in average precision). HZYZ also performs quite well on the Pavia image, and GB performs fairly well. Table 4 confirms this interpretation: the BE algorithm performs significantly better than the other algorithms. Excluding BE, the SSSE1 and SSSE2 algorithms perform significantly better than all remaining algorithms.
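The pairwise significance test summarized in Tables 2 and 4 follows the standard normal deviate described above; a minimal Python sketch is shown below (this is the commonly used two-sample Kappa comparison rather than a transcription of the cited report, and it assumes the Kappa variance estimates computed from the confusion matrices are available).

```python
import numpy as np

def kappa_z_score(kappa_1, var_1, kappa_2, var_2):
    """Standard normal deviate comparing two independent Kappa estimates."""
    return abs(kappa_1 - kappa_2) / np.sqrt(var_1 + var_2)

def significantly_different(kappa_1, var_1, kappa_2, var_2, z_crit=1.96):
    # Z above 1.96 indicates a significant difference at the 95% confidence level.
    return kappa_z_score(kappa_1, var_1, kappa_2, var_2) > z_crit
```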

Table 1: Indian Pines classification results using various dimensionality reduction algorithms. Columns correspond to the number of ground-truth samples and to the algorithms SSSE1, SSSE2, SM, GB, HZYZ, BE, BL2, BL3, and BM. OA = Overall Accuracy, AA = Average Accuracy, AP = Average Precision, ASe = Average Sensitivity, ASp = Average Specificity, $\kappa$ = Kappa coefficient. Class rows report per-class accuracy. Classes: 1 = Alfalfa, 2 = Corn-notill, 3 = Corn-mintill, 4 = Corn, 5 = Grass-pasture, 6 = Grass-trees, 7 = Grass-pasture-mowed, 8 = Hay-windrowed, 9 = Oats, 10 = Soybean-notill, 11 = Soybean-mintill, 12 = Soybean-clean, 13 = Wheat, 14 = Woods, 15 = Buildings-Grass-Trees-Drives, 16 = Stone-Steel-Towers. All quantities (except number of samples) are percentages.

Table 2: Statistical significance between $\kappa$ values of classification algorithms on Indian Pines data. Each entry is "+" if $\kappa$ is significantly larger for the row method than for the column method, "-" if it is significantly smaller, and "o" if there is no significant difference. Significance is measured at the 95% confidence level.

Table 3: Pavia classification results using various dimensionality reduction algorithms. Columns correspond to the number of ground-truth samples and to the algorithms SSSE1, SSSE2, SM, GB, HZYZ, BE, BL2, BL3, and BM. OA = Overall Accuracy, AA = Average Accuracy, AP = Average Precision, ASe = Average Sensitivity, ASp = Average Specificity, $\kappa$ = Kappa coefficient. Class rows report per-class accuracy. Classes: 1 = Asphalt, 2 = Meadows, 3 = Gravel, 4 = Trees, 5 = Painted metal sheets, 6 = Bare soil, 7 = Bitumen, 8 = Self-Blocking Bricks, 9 = Shadows. All quantities (except number of samples) are percentages.

Table 4: Statistical significance between $\kappa$ values of classification algorithms on Pavia data. Each entry is "+" if $\kappa$ is significantly larger for the row method than for the column method, "-" if it is significantly smaller, and "o" if there is no significant difference. Significance is measured at the 95% confidence level.

5. CONCLUSION

In this article, we proposed a new algorithm for dimensionality reduction using both the spatial and spectral information present in a hyperspectral image. The algorithm is based on Schroedinger Eigenmaps, which has traditionally been used for semi-supervised learning. By constructing a graph based solely on spectral information and then defining a cluster potential matrix that encodes spatial relationships between pixels, our proposed algorithm provides a natural way to trade off the relative impact of the spatial versus spectral information in the dimensionality reduction process. Classification experiments on publicly available hyperspectral images with manually labeled ground truth show that the proposed algorithm exhibits superior or competitive performance relative to a variety of prior-art algorithms for reducing the dimension of the data provided to a standard classification algorithm.

APPENDIX

Prototype implementations of the Spatial-Spectral Schroedinger Eigenmaps algorithms (SSSE1 and SSSE2) are available for download at MATLAB Central under File ID #.

ACKNOWLEDGEMENTS

The authors would like to thank Prof. Landgrebe (Purdue University, USA) for providing the Indian Pines data and Prof. Paolo Gamba (Pavia University, Italy) for providing the Pavia University data.

REFERENCES

[1] Schott, J. R., [Remote Sensing: The Image Chain Approach], Oxford University Press, 2nd ed. (2007).
[2] Prasad, S. and Bruce, L., Limitations of principal components analysis for hyperspectral target recognition, IEEE Geoscience and Remote Sensing Letters 5 (Oct 2008).
[3] Kim, D. and Finkel, L., Hyperspectral image processing using locally linear embedding, in [First International IEEE EMBS Conference on Neural Engineering, Conference Proceedings] (March 2003).
[4] Bachmann, C., Ainsworth, T., and Fusina, R., Exploiting manifold geometry in hyperspectral imagery, IEEE Transactions on Geoscience and Remote Sensing 43 (March 2005).
[5] Fauvel, M., Chanussot, J., and Benediktsson, J., Kernel principal component analysis for the classification of hyperspectral remote sensing data of urban areas, EURASIP Journal on Advances in Signal Processing 2009(783194), 1-14 (2009).
[6] Halevy, A., [Extensions of Laplacian Eigenmaps for Manifold Learning], PhD thesis, University of Maryland, College Park (2011).
[7] Hou, B., Zhang, X., Ye, Q., and Zheng, Y., A novel method for hyperspectral image classification based on Laplacian eigenmap pixels distribution-flow, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 6(3) (2013).
[8] Benedetto, J., Czaja, W., Dobrosotskaya, J., Doster, T., Duke, K., and Gillis, D., Integration of heterogeneous data for classification in hyperspectral satellite imagery, in [Proc. of SPIE Vol. 8390] (June 2012).
[9] Gillis, D. B. and Bowles, J. H., Hyperspectral image segmentation using spatial-spectral graphs, Proc. SPIE 8390, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII, 83901Q (2012).
[10] Czaja, W. and Ehler, M., Schroedinger eigenmaps for the analysis of biomedical data, IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (May 2013).
[11] Belkin, M. and Niyogi, P., Laplacian eigenmaps for dimensionality reduction and data representation, Neural Computation 15 (June 2003).
[12] Shi, J. and Malik, J., Normalized cuts and image segmentation, in [Proceedings of the 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition] (1997).

[13] Shi, J. and Malik, J., Normalized cuts and image segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence 22(8) (2000).
[14] Benedetto, J., Czaja, W., Dobrosotskaya, J., Doster, T., Duke, K., and Gillis, D., Semi-supervised learning of heterogeneous data in remote sensing imagery, in [Proc. of SPIE Vol. 8401] (June 2012).
[15] Senseman, G. M., Bagley, C. F., and Tweddale, S. A., Accuracy assessment of the discrete classification of remotely-sensed digital data for landcover mapping, USACERL Technical Report EN-95/04, 1-27 (April 1995).


EVALUATION OF CONVENTIONAL DIGITAL CAMERA SCENES FOR THEMATIC INFORMATION EXTRACTION ABSTRACT EVALUATION OF CONVENTIONAL DIGITAL CAMERA SCENES FOR THEMATIC INFORMATION EXTRACTION H. S. Lim, M. Z. MatJafri and K. Abdullah School of Physics Universiti Sains Malaysia, 11800 Penang ABSTRACT A study

More information

Including the Size of Regions in Image Segmentation by Region Based Graph

Including the Size of Regions in Image Segmentation by Region Based Graph International Journal of Emerging Engineering Research and Technology Volume 3, Issue 4, April 2015, PP 81-85 ISSN 2349-4395 (Print) & ISSN 2349-4409 (Online) Including the Size of Regions in Image Segmentation

More information

Spectral-spatial rotation forest for hyperspectral image classification

Spectral-spatial rotation forest for hyperspectral image classification Spectral-spatial rotation forest for hyperspectral image classification Junshi Xia, Lionel Bombrun, Yannick Berthoumieu, Christian Germain, Peijun Du To cite this version: Junshi Xia, Lionel Bombrun, Yannick

More information

Revista de Topografía Azimut

Revista de Topografía Azimut Revista de Topografía Azimut http://revistas.udistrital.edu.co/ojs/index.php/azimut Exploration of Fourier shape descriptor for classification of hyperspectral imagery Exploración del descriptor de forma

More information

DEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION

DEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION DEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION S.Dhanalakshmi #1 #PG Scholar, Department of Computer Science, Dr.Sivanthi Aditanar college of Engineering, Tiruchendur

More information

Diagonal Principal Component Analysis for Face Recognition

Diagonal Principal Component Analysis for Face Recognition Diagonal Principal Component nalysis for Face Recognition Daoqiang Zhang,2, Zhi-Hua Zhou * and Songcan Chen 2 National Laboratory for Novel Software echnology Nanjing University, Nanjing 20093, China 2

More information

Hyperspectral and Multispectral Image Fusion Using Local Spatial-Spectral Dictionary Pair

Hyperspectral and Multispectral Image Fusion Using Local Spatial-Spectral Dictionary Pair Hyperspectral and Multispectral Image Fusion Using Local Spatial-Spectral Dictionary Pair Yifan Zhang, Tuo Zhao, and Mingyi He School of Electronics and Information International Center for Information

More information

Relative Constraints as Features

Relative Constraints as Features Relative Constraints as Features Piotr Lasek 1 and Krzysztof Lasek 2 1 Chair of Computer Science, University of Rzeszow, ul. Prof. Pigonia 1, 35-510 Rzeszow, Poland, lasek@ur.edu.pl 2 Institute of Computer

More information

Spectral-Spatial Classification of Hyperspectral Image Based on Kernel Extreme Learning Machine

Spectral-Spatial Classification of Hyperspectral Image Based on Kernel Extreme Learning Machine Remote Sens. 2014, 6, 5795-5814; doi:10.3390/rs6065795 Article OPEN ACCESS remote sensing ISSN 2072-4292 www.mdpi.com/journal/remotesensing Spectral-Spatial Classification of Hyperspectral Image Based

More information

The Analysis of Parameters t and k of LPP on Several Famous Face Databases

The Analysis of Parameters t and k of LPP on Several Famous Face Databases The Analysis of Parameters t and k of LPP on Several Famous Face Databases Sujing Wang, Na Zhang, Mingfang Sun, and Chunguang Zhou College of Computer Science and Technology, Jilin University, Changchun

More information

Constrained Manifold Learning for Hyperspectral Imagery Visualization

Constrained Manifold Learning for Hyperspectral Imagery Visualization IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 1 Constrained Manifold Learning for Hyperspectral Imagery Visualization Danping Liao, Yuntao Qian Member, IEEE, and Yuan

More information

Object Geolocation from Crowdsourced Street Level Imagery

Object Geolocation from Crowdsourced Street Level Imagery Object Geolocation from Crowdsourced Street Level Imagery Vladimir A. Krylov and Rozenn Dahyot ADAPT Centre, School of Computer Science and Statistics, Trinity College Dublin, Dublin, Ireland {vladimir.krylov,rozenn.dahyot}@tcd.ie

More information

Learning a Manifold as an Atlas Supplementary Material

Learning a Manifold as an Atlas Supplementary Material Learning a Manifold as an Atlas Supplementary Material Nikolaos Pitelis Chris Russell School of EECS, Queen Mary, University of London [nikolaos.pitelis,chrisr,lourdes]@eecs.qmul.ac.uk Lourdes Agapito

More information

Modelling and Visualization of High Dimensional Data. Sample Examination Paper

Modelling and Visualization of High Dimensional Data. Sample Examination Paper Duration not specified UNIVERSITY OF MANCHESTER SCHOOL OF COMPUTER SCIENCE Modelling and Visualization of High Dimensional Data Sample Examination Paper Examination date not specified Time: Examination

More information

Laplacian Faces: A Face Recognition Tool

Laplacian Faces: A Face Recognition Tool Laplacian Faces: A Face Recognition Tool Prof. Sami M Halwani 1, Prof. M.V.Ramana Murthy 1, Prof. S.B.Thorat 1 Faculty of Computing and Information Technology, King Abdul Aziz University, Rabigh, KSA,Email-mv.rm50@gmail.com,

More information

Learning to Recognize Faces in Realistic Conditions

Learning to Recognize Faces in Realistic Conditions 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

Title: A Deep Network Architecture for Super-resolution aided Hyperspectral Image Classification with Class-wise Loss

Title: A Deep Network Architecture for Super-resolution aided Hyperspectral Image Classification with Class-wise Loss 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising

More information

Visual Representations for Machine Learning

Visual Representations for Machine Learning Visual Representations for Machine Learning Spectral Clustering and Channel Representations Lecture 1 Spectral Clustering: introduction and confusion Michael Felsberg Klas Nordberg The Spectral Clustering

More information

Technical Report. Title: Manifold learning and Random Projections for multi-view object recognition

Technical Report. Title: Manifold learning and Random Projections for multi-view object recognition Technical Report Title: Manifold learning and Random Projections for multi-view object recognition Authors: Grigorios Tsagkatakis 1 and Andreas Savakis 2 1 Center for Imaging Science, Rochester Institute

More information

Identifying Layout Classes for Mathematical Symbols Using Layout Context

Identifying Layout Classes for Mathematical Symbols Using Layout Context Rochester Institute of Technology RIT Scholar Works Articles 2009 Identifying Layout Classes for Mathematical Symbols Using Layout Context Ling Ouyang Rochester Institute of Technology Richard Zanibbi

More information

IMAGE DENOISING USING NL-MEANS VIA SMOOTH PATCH ORDERING

IMAGE DENOISING USING NL-MEANS VIA SMOOTH PATCH ORDERING IMAGE DENOISING USING NL-MEANS VIA SMOOTH PATCH ORDERING Idan Ram, Michael Elad and Israel Cohen Department of Electrical Engineering Department of Computer Science Technion - Israel Institute of Technology

More information

Globally and Locally Consistent Unsupervised Projection

Globally and Locally Consistent Unsupervised Projection Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence Globally and Locally Consistent Unsupervised Projection Hua Wang, Feiping Nie, Heng Huang Department of Electrical Engineering

More information

Color Local Texture Features Based Face Recognition

Color Local Texture Features Based Face Recognition Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India

More information

High-Resolution Image Classification Integrating Spectral-Spatial-Location Cues by Conditional Random Fields

High-Resolution Image Classification Integrating Spectral-Spatial-Location Cues by Conditional Random Fields IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 9, SEPTEMBER 2016 4033 High-Resolution Image Classification Integrating Spectral-Spatial-Location Cues by Conditional Random Fields Ji Zhao, Student

More information

Image Similarities for Learning Video Manifolds. Selen Atasoy MICCAI 2011 Tutorial

Image Similarities for Learning Video Manifolds. Selen Atasoy MICCAI 2011 Tutorial Image Similarities for Learning Video Manifolds Selen Atasoy MICCAI 2011 Tutorial Image Spaces Image Manifolds Tenenbaum2000 Roweis2000 Tenenbaum2000 [Tenenbaum2000: J. B. Tenenbaum, V. Silva, J. C. Langford:

More information

Short Survey on Static Hand Gesture Recognition

Short Survey on Static Hand Gesture Recognition Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of

More information

Improving Image Segmentation Quality Via Graph Theory

Improving Image Segmentation Quality Via Graph Theory International Symposium on Computers & Informatics (ISCI 05) Improving Image Segmentation Quality Via Graph Theory Xiangxiang Li, Songhao Zhu School of Automatic, Nanjing University of Post and Telecommunications,

More information

Discriminant Analysis-Based Dimension Reduction for Hyperspectral Image Classification

Discriminant Analysis-Based Dimension Reduction for Hyperspectral Image Classification Satellite View istockphoto.com/frankramspott puzzle outline footage firm, inc. Discriminant Analysis-Based Dimension Reduction for Hyperspectral Image Classification A survey of the most recent advances

More information