Multi-level fusion of graph based discriminant analysis for hyperspectral image classification
Multi-level fusion of graph-based discriminant analysis for hyperspectral image classification

Fubiao Feng 1 · Qiong Ran 1 · Wei Li 1

Received: 28 May 2016 / Revised: 28 October 2016 / Accepted: 18 November 2016
© Springer Science+Business Media New York 2016

Abstract Based on the graph-embedding framework, sparse graph-based discriminant analysis (SGDA), collaborative graph-based discriminant analysis (CGDA) and low-rank graph-based discriminant analysis (LGDA) have been proposed, each with a different graph construction. However, due to the inherent characteristics of the l1-norm, l2-norm and nuclear norm, a single graph may not be optimal for capturing both the global and the local structure of the data. In this paper, a multi-level fusion strategy is proposed to combine the three graph-construction methods: 1) multiple graph-based discriminant analysis (MGDA), a feature-level fusion with adaptive weights; and 2) a decision-level fusion with D-S evidence theory (GDA-DS), followed by classification with a typical support vector machine (SVM). Experimental results on three hyperspectral image datasets demonstrate that the fused strategies yield better classification performance.

Keywords Hyperspectral data · Dimensionality reduction · Graph embedding · Multi-level fusion · D-S evidence theory

1 Introduction

Hyperspectral imagery consists of hundreds of narrow contiguous wavelength bands that carry detailed spectral information about materials [5, 17]. Because these bands are often highly correlated, many dimensionality reduction algorithms have been developed to decrease computational complexity and remove redundant features that may deteriorate classification performance [19].

Qiong Ran  ranqiong@mail.buct.edu.cn
1 College of Information Science and Technology, Beijing University of Chemical Technology, Beijing, China
The major strategies for dimensionality reduction comprise band selection and projection-based techniques. Band selection aims to find a small subset of the original bands that carries sufficient information [7, 16, 29]. In projection-based techniques, the original bands are projected into a lower-dimensional subspace according to some criterion function. Traditional projection-based approaches include principal component analysis (PCA) and Fisher's linear discriminant analysis (LDA). Many modified versions have been developed, such as kernel PCA (KPCA) [8], noise-adjusted subspace LDA (NA-SLDA) [14], and kernel LDA (KDA) [15]. Other effective techniques include local Fisher discriminant analysis (LFDA) [19] and a kernel version of LFDA (KLFDA) [18]. The feature extraction methods in [3, 11, 25] have also made significant contributions. Besides the aforementioned algorithms, there is another branch, manifold learning, for dimensionality reduction. Manifold learning started with ISOMAP [30] and locally linear embedding (LLE) [27]. ISOMAP replaces the Euclidean distance between pixel pairs in multi-dimensional scaling (MDS) [12] with the geodesic distance to reduce the dimensionality of the data. LLE preserves the local linear structure of the data and performs well. Many manifold learning algorithms followed, such as Laplacian eigenmaps (LE) [2], locality preserving projection (LPP) [24], and neighborhood preserving embedding (NPE) [9]. LE applies the Laplacian matrix to capture local neighborhood information; LPP and NPE are linearized versions of LE and LLE, respectively. Recently, graph theory has been widely applied in dimensionality reduction (DR). A general graph-embedding (GE) framework was proposed in [33] to describe many existing DR techniques, and many graphs have been constructed within this framework.
Sparse graph-based discriminant analysis (SGDA) was proposed [20, 22], preserving sparse connections in a block-structured affinity matrix built from class-specific labeled samples. In SGDA, the graph is constructed by an l1-norm optimization, which differs from traditional methods (e.g., k-nearest neighbors with Euclidean distance [26]). Furthermore, the weights of an l1-graph derived via sparse representation can automatically select the more discriminative neighbors in the feature space. However, the l1-graph tends to represent each sample individually: although it preserves locally linear structure, it lacks a global constraint on its solutions. In collaborative graph-based discriminant analysis (CGDA) [21], the graph is constructed by an l2-norm optimization instead of the l1-norm optimization of SGDA. With collaborative representation among the labeled samples, the solution can be expressed in closed form. However, the l2-graph has difficulty distinguishing erroneous data. In low-rank graph-based discriminant analysis (LGDA) [1, 6], low-rank representation (LRR) with the nuclear norm has been shown to be excellent at preserving global data structure through a rank function. Unlike the l1-graph, similar samples have similar representations under a common base (dictionary) in the low-rank graph. An ideal graph should reveal the true intrinsic complexity of the data, including the local neighborhood structure as well as the global geometrical structure (e.g., subspace structure, manifolds, and multiple clusters), especially for high-dimensional data. Thus, in this work, multi-level fusion strategies [4, 10] are designed to fuse multiple graphs effectively, taking the limitations of the individual graphs into account: multiple graph-based discriminant analysis (MGDA) and graph-based discriminant analysis with D-S evidence theory (GDA-DS).
In our first strategy, the l1-graph, the l2-graph and the low-rank graph are combined via a weighted sum whose weights trade off global and local structure. Graph-embedding-based discriminant analysis is then employed for dimensionality reduction, seeking a projection that captures the essential data structure in a single informative graph with sparsity, collaboration and low-rankness properties, solved as a generalized eigenvalue
decomposition problem. In our second strategy, D-S evidence theory [34] is used to fuse the results of SGDA, CGDA and LGDA after SVM classification. This method is quite effective at correcting the deviation of each individual graph.

The rest of this paper is organized as follows. The multi-level fusion strategies are discussed in Section 2. Classification results on real hyperspectral datasets are reported in Section 3. Section 4 gives the conclusion.

2 Proposed dimensionality reduction methods

The graph-embedding framework employs undirected weighted graphs for the dimensionality reduction task. Consider an N-band hyperspectral dataset with M labeled samples denoted as X = {x_i}_{i=1}^M in an R^{N×1} feature space. An intrinsic graph is denoted as G = {X, W}, and a penalty graph is represented as G^p = {X, W^p}, where W and W^p are M×M matrices representing edge weights between vertices. W expresses the similarities between vertices, while W^p captures similarity relationships that are to be suppressed by the dimensionality reduction process. Different graphs are constructed with specific definitions of the intrinsic and penalty graphs, such as the l1-graph, the l2-graph and the low-rank graph. Each of these graphs has limitations stemming from the inherent characteristics of its norm. To capture the global and local structure of the data optimally, we propose multiple graph-based discriminant analysis (MGDA) and graph-based discriminant analysis with D-S evidence theory (GDA-DS) to perform multi-level fusion of the graphs.

2.1 Multiple graph-based discriminant analysis (MGDA)

In an l1-graph, for each pixel x_i in the dictionary X, the sparse representation (SR) vector is calculated by solving the l1-norm optimization problem [13],

    arg min_{w_i} ||w_i||_1   s.t.   X w_i = x_i,    (1)

where w_i = [w_{i1}, w_{i2}, ..., w_{iM}]^T is a vector of size M×1 and ||·||_1 denotes the l1-norm, which sums the absolute values of all entries. An l2-graph is constructed with the following objective function,

    arg min_{w_i} ||w_i||_2   s.t.
    X w_i = x_i,    (2)

where w_i = [w_{i1}, w_{i2}, ..., w_{iM}]^T is a vector of size M×1 and ||·||_2 denotes the l2-norm. Over all the data, (1) and (2) can be expressed jointly as

    arg min_W ||W||_F   s.t.   X W = X,    (3)

where W = [w_1, w_2, ..., w_M] denotes the weight matrix of size M×M whose column w_i is the representation vector corresponding to x_i; when F denotes the l1-norm (with diag(W) = 0), (3) yields the l1-graph, and when F denotes the l2-norm it yields the l2-graph. A low-rank graph is constructed with the following objective function,

    arg min_W ||W||_*   s.t.   X = X W,    (4)

where ||·||_* represents the nuclear norm of a matrix. The equation can be re-formulated as

    arg min_W ||W||_* + λ ||X − X W||_{2,1},    (5)
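As a concrete sketch of how the per-pixel representation vectors in (1) and (2) can be computed, the snippet below solves a lasso relaxation of (1) with a simple ISTA loop and uses a ridge-regularized closed form for the l2-graph. This is an illustrative approximation, not the authors' implementation: the exact equality constraints of (1)–(3) are replaced by penalized versions, and the function names and regularization values are our own.

```python
import numpy as np

def sparse_rep_ista(X, x, alpha=0.01, iters=500):
    """l1 sparse representation for one pixel: a lasso relaxation of (1),
    min_w 0.5*||X w - x||^2 + alpha*||w||_1, solved by ISTA."""
    w = np.zeros(X.shape[1])
    t = 1.0 / np.linalg.norm(X, 2) ** 2        # step size 1/Lipschitz constant
    for _ in range(iters):
        z = w - t * (X.T @ (X @ w - x))        # gradient step on the smooth part
        w = np.sign(z) * np.maximum(np.abs(z) - t * alpha, 0.0)  # soft-threshold
    return w

def collaborative_graph(X, lam=0.01):
    """l2 (collaborative) graph weights for (2)/(3) in ridge-regularized
    closed form: W = (X^T X + lam*I)^{-1} X^T X."""
    M = X.shape[1]
    G = X.T @ X                                # (M, M) Gram matrix
    return np.linalg.solve(G + lam * np.eye(M), G)

# toy dictionary: 3 bands, 2 samples (one sample per column)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
w_l1 = sparse_rep_ista(X, np.array([1.0, 0.0, 0.0]))
W_l2 = collaborative_graph(X)
```

With the orthonormal toy dictionary, the l1 solver recovers (a slightly shrunken copy of) the single matching atom, and the l2 weights approximately reconstruct the data, X W ≈ X.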
where ||·||_{2,1} represents the l_{2,1}-norm [35] and λ is a regularization parameter. The symmetric low-rank graph can be represented as W_lgda = (W + W^T)/2.

In the first multi-level fusion strategy, the multiple graphs, i.e., W_sgda, W_cgda and W_lgda, are fused via a weighted sum,

    W_mgda = ω1 W_sgda + ω2 W_cgda + ω3 W_lgda
    s.t.  ω1 + ω2 + ω3 = 1,  ω1 ≥ 0,  ω2 ≥ 0,  ω3 ≥ 0,    (6)

where ω1, ω2 and ω3 are parameters that balance the three terms. Through this combination, the resulting graph can simultaneously reflect the global and local structure of the data. Note that when ω2 = 0 and ω3 = 0 the graph reduces to the l1-graph; when ω1 = 0 and ω3 = 0 it reduces to the l2-graph; and when ω1 = 0 and ω2 = 0 it reduces to the low-rank graph. Figure 1 shows the model of fusion with multiple graphs.

2.2 Graph-embedding subspace learning and analysis on MGDA

A graph-embedding subspace learning framework [32, 33] seeks an N×K projection matrix P (with K ≪ N) that yields a low-dimensional subspace Y = P^T X in which the desired statistical or geometrical characteristics are preserved. The objective function can be described as

    P = arg min_{P^T X L^p X^T P = I}  Σ_{i≠j} ||P^T x_i − P^T x_j||^2 W_{i,j}
      = arg min_{P^T X L^p X^T P = I}  tr(P^T X L X^T P),    (7)

where L is the Laplacian matrix of graph G, L = D − W; for the fusion strategy W = W_mgda, and D is a diagonal matrix whose i-th diagonal element is D_ii = Σ_{j=1}^M W_{i,j}. If a penalty graph is used, L^p may be the Laplacian matrix of the penalty graph G^p or a simple scale-normalization constraint [33]. The optimal projection matrix P can be obtained as

    P = arg min_P  tr(P^T X L X^T P) / tr(P^T X L^p X^T P),    (8)

Fig. 1 The flowchart of fusion with multiple graphs
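The fusion in (6) and the Laplacian L = D − W used below it are straightforward to implement. The following is a minimal sketch with our own function names; small symmetric toy matrices stand in for the actual SGDA, CGDA and LGDA graphs.

```python
import numpy as np

def fuse_graphs(W_sgda, W_cgda, W_lgda, w1, w2, w3):
    """Weighted fusion of three graph weight matrices, as in eq. (6)."""
    assert w1 >= 0 and w2 >= 0 and w3 >= 0
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9
    return w1 * W_sgda + w2 * W_cgda + w3 * W_lgda

def laplacian(W):
    """Graph Laplacian L = D - W, with D_ii = sum_j W_ij."""
    D = np.diag(W.sum(axis=1))
    return D - W

# toy 3x3 graphs (symmetric, nonnegative) standing in for the real ones
Ws = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
Wc = np.array([[0, .5, .5], [.5, 0, .5], [.5, .5, 0]])
Wl = np.eye(3)
W_mgda = fuse_graphs(Ws, Wc, Wl, 0.2, 0.3, 0.5)
L = laplacian(W_mgda)
```

By construction every row of L sums to zero, and the fused graph stays symmetric whenever the three input graphs are symmetric.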
which can be further transformed into a generalized eigenvalue decomposition problem,

    X L X^T P = Λ X L^p X^T P,    (9)

where Λ is a diagonal eigenvalue matrix. The N×K projection matrix P is constructed from the K eigenvectors corresponding to the K smallest nonzero eigenvalues. By constructing the optimized graph W_mgda, the projected vectors are expected to be better centralized.

In this work, MGDA is proposed to reduce the dimensionality of the hyperspectral image. However, spectral information can easily be affected by many factors, such as differences in illumination conditions, geometric features of material surfaces, and atmospheric effects [28]. It is therefore necessary to examine the statistical distributions of objects in hyperspectral data. We expect MGDA to outperform the individual graphs. The aforementioned graph matrices (i.e., SGDA, CGDA, LGDA and the proposed MGDA) computed on 3-class synthetic data (about 400 samples per class) are illustrated in Fig. 2. As the results show, SGDA represents the data sparsely in Fig. 2a; CGDA obtains good results via collaborative representation in Fig. 2b; LGDA is more robust to noise in Fig. 2c; and MGDA fuses the advantages of the three graphs into a single graph in Fig. 2d.

Figure 3 illustrates class distributions and classification results along the first two dimensions. Figure 3a shows the original data (3-class synthetic data), where class 1 is represented by

Fig. 2 Visualization of different graph weights using 3-class synthetic data
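Under the simple scale-normalization penalty L^p = I mentioned above, (9) reduces to an ordinary symmetric eigenproblem, which the sketch below solves; this choice of penalty is an assumption made for illustration, not necessarily the paper's experimental setting, and the names are our own.

```python
import numpy as np

def gda_projection(X, L, K):
    """Projection matrix from the graph-embedding problem (9), assuming
    the scale-normalization penalty L^p = I, so that X L X^T p = lambda p
    is an ordinary symmetric eigenproblem.
    X : (N, M) data matrix, L : (M, M) Laplacian, K : target dimension."""
    A = X @ L @ X.T                        # (N, N), symmetric
    vals, vecs = np.linalg.eigh(A)         # eigenvalues in ascending order
    return vecs[:, :K]                     # eigenvectors of the K smallest

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))            # 5 bands, 8 samples
W = np.abs(rng.standard_normal((8, 8)))
W = (W + W.T) / 2                          # symmetric nonnegative toy graph
L = np.diag(W.sum(axis=1)) - W
P = gda_projection(X, L, K=2)
Y = P.T @ X                                # 2-D embedded features
```

Because `eigh` returns orthonormal eigenvectors, the resulting projection satisfies P^T P = I, matching the normalization implied by L^p = I.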
Fig. 3 Two-dimensional 3-class synthetic data classified (with accuracy) by different graph-based methods

a red plus, class 2 by a blue square, and class 3 by a black circle. Some points from classes 2 and 3 are hard to distinguish, and class 1 is composed of two parts, one of which overlaps with class 2 (marked with a black circle). It is obvious that MGDA is better than
the others in dealing with the inseparable data: both the ambiguous boundary between classes 2 and 3 and the overlapping region between classes 1 and 2 are handled well. To some extent, MGDA adequately captures the different advantages and performs optimally.

2.3 Graph-based discriminant analysis with D-S evidence theory (GDA-DS)

In this subsection, D-S evidence theory performs a decision-level fusion to decide the label of each pixel by fusing the results of SGDA, CGDA and LGDA after SVM classification [23]. D-S evidence theory is widely applied in information fusion. As a reasoning method under uncertainty, it has two main advantages: 1) compared with Bayesian probability theory, it can assign belief to sets of hypotheses rather than only to single hypotheses; and 2) it can directly express "uncertain" and "don't know" states.

2.3.1 Frame of discernment

Let Θ = {θ1, θ2, ..., θC} be a frame of discernment, a set of C elements corresponding to the C labels. It is composed of C mutually exhaustive and exclusive hypotheses. The power set of Θ is the set of all 2^C subsets of Θ, represented as P(Θ):

    P(Θ) = {∅, {θ1}, {θ2}, ..., {θC}, {θ1, θ2}, {θ1, θ3}, ..., Θ}

where ∅ denotes the empty set. The subsets containing only one element are called singletons.

2.3.2 Basic probability assignment (BPA) function

In the above frame, a BPA function m, also called a mass function, is defined by m : 2^Θ → [0, 1], subject to the following conditions:

    Σ_{A⊆Θ} m(A) = 1,   m(∅) = 0,    (10)

where m(A) denotes the proportion of all available evidence that supports A. If subset A contains only one element, A is called a unit focal element; if A contains multiple elements, it is called a multiple focal element.

2.3.3 Belief and plausibility functions

In a frame Θ with a given BPA m, a belief function Bel is defined as

    Bel(A) = Σ_{B⊆A} m(B),    (11)

and a plausibility function Pl is defined as

    Pl(A) = 1 − Bel(Ā) = Σ_{B∩A≠∅} m(B),    (12)

where A ⊆ Θ and Ā is the complement of A. Pl(A) is also called the plausible function or upper-limit function.
Bel(A) denotes the degree to which proposition A is supported as true, while Pl(A) denotes the degree to which proposition A cannot be refuted. The probable range of proposition A can therefore be expressed as the interval [Bel(A), Pl(A)], i.e., its uncertainty, and the interval span Pl(A) − Bel(A) represents the ignorance about proposition A. The relationship between Bel(A) and Pl(A) is shown in Fig. 4.
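The definitions (10)–(12) can be made concrete with a small example over a three-label frame. In the sketch below (helper names are our own), a BPA is a dictionary keyed by frozensets of labels, and Bel and Pl follow (11) and (12) directly.

```python
from itertools import combinations

def powerset(theta):
    """All 2^C subsets of the frame of discernment, as in Section 2.3.1."""
    s = list(theta)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def bel(m, A):
    """Belief, eq. (11): total mass of the focal elements contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    """Plausibility, eq. (12): total mass of focal elements intersecting A."""
    return sum(v for B, v in m.items() if B & A)

# frame of discernment with three labels and a BPA (masses sum to 1)
theta = frozenset({'c1', 'c2', 'c3'})
m = {frozenset({'c1'}): 0.5,
     frozenset({'c1', 'c2'}): 0.3,
     theta: 0.2}
A = frozenset({'c1'})
```

For this BPA, Bel({c1}) = 0.5 while Pl({c1}) = 1.0, so the probability of label c1 lies in [0.5, 1.0]; the span 0.5 is the ignorance about c1.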
Fig. 4 Confidence interval schematic

2.3.4 Combination rule of evidence

Suppose m1 and m2 are two mass functions. According to Dempster's orthogonal rule [31], we have

    m(C) = (m1 ⊕ m2)(C) = (1/(1−K)) Σ_{A∩B=C} m1(A) m2(B)  for C ≠ ∅,  and  m(∅) = 0,    (13)

where K is the conflict coefficient,

    K = Σ_{A∩B=∅} m1(A) m2(B)   (0 ≤ K ≤ 1).    (14)

K measures the degree of conflict between m1 and m2, and the denominator 1−K in (13) is a normalization factor. K = 0 means there is no conflict between m1 and m2, whereas K = 1 means complete contradiction between m1 and m2, in which case the rule is undefined. If there are more than two mass functions, m1, m2, ..., mn, the combination rule is

    (m1 ⊕ m2 ⊕ ... ⊕ mn)(A) = (1/(1−K)) Σ_{A1∩A2∩...∩An=A} m1(A1) m2(A2) ... mn(An),    (15)

where

    K = Σ_{A1∩A2∩...∩An=∅} m1(A1) m2(A2) ... mn(An).    (16)

In our second strategy, SVM classification is used to obtain the probability of each label under each graph. After SVM classification, P_1n is obtained for SGDA, P_2n for CGDA, and P_3n for LGDA, where n = 1, 2, ..., C and C is the number of class labels. P_1n, P_2n and P_3n serve as the mass functions m1, m2, m3. The system structure is shown in Fig. 5.

3 Experiments and analysis

In this section, the proposed multiple graph-based discriminant analysis (MGDA) and graph-based discriminant analysis with D-S evidence theory (GDA-DS) are validated with three popular hyperspectral datasets. The typical SVM is employed as the classifier.
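The combination rule (13)–(14) of Section 2.3.4 can be sketched directly from the definitions. Below, mass functions are dictionaries mapping focal elements (frozensets) to masses; the function name and toy numbers are our own.

```python
from itertools import product

def dempster(m1, m2):
    """Dempster's orthogonal rule, eqs. (13)-(14).
    m1, m2 : dicts mapping frozenset (focal element) -> mass."""
    # conflict coefficient K, eq. (14): mass on empty intersections
    K = sum(v1 * v2
            for (A, v1), (B, v2) in product(m1.items(), m2.items())
            if not (A & B))
    if K >= 1.0:
        raise ValueError("complete conflict: combination undefined")
    combined = {}
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = A & B
        if C:  # m(empty set) = 0 by definition
            combined[C] = combined.get(C, 0.0) + v1 * v2 / (1.0 - K)
    return combined

# two-label frame; two toy mass functions standing in for SVM outputs
theta = frozenset({'a', 'b'})
m1 = {frozenset({'a'}): 0.6, theta: 0.4}
m2 = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.3, theta: 0.2}
m12 = dempster(m1, m2)
```

Here K = 0.6 × 0.3 = 0.18, and the combined masses again sum to 1. The n-source rule (15)–(16) follows by folding `dempster` over a list of mass functions, which is how the three per-graph SVM probability outputs P_1n, P_2n, P_3n would be fused in practice.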
Fig. 5 System structure of D-S fusion based on the SVM classifier

3.1 Hyperspectral data

The first dataset was acquired using the National Aeronautics and Space Administration's (NASA) Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and was collected over northwest Indiana's Indian Pines test site in June. The image represents

Fig. 6 Parameter tuning of λ for SGDA, CGDA and LGDA
Table 1 Classification accuracy (%) versus the values of ω1 and ω2 for MGDA on the Indian Pines dataset with 10 % training samples per class

a classification scenario with 220 bands in the 0.4- to 2.45-μm region of the visible and infrared spectrum, with a spatial resolution of 20 m. The scene contains two-thirds agriculture and one-third forest. In this work, a total of 202 bands is used after removal of the water-absorption bands, and twelve classes are used in this study. We removed four classes because each contained fewer than 100 samples. There are 10 % training samples per class (randomly selected) and a total of 9,155 testing samples.

The second dataset was also collected by the AVIRIS sensor, capturing an area over Salinas Valley, California, with a spatial resolution of 3.7 m. The image comprises 204 bands after 20 water-absorption bands are removed. It mainly contains vegetables, bare soils, and vineyard fields. There are 16 different classes, with 10 % training samples per class and a total of 48,714 testing samples.

The third dataset was acquired by the AVIRIS instrument over the Kennedy Space Center (KSC), Florida, on March 23. AVIRIS acquires data in 224 bands of 10 nm width. The KSC data, acquired from an altitude of

Table 2 Classification accuracy (%) versus the values of ω1 and ω2 for MGDA on the Salinas dataset with 10 % training samples per class
Table 3 Classification accuracy (%) versus the values of ω1 and ω2 for MGDA on the KSC dataset with 40 % training samples per class

approximately 20 km, has a spatial resolution of 18 m. After removing water-absorption and low-SNR bands, 176 bands were used for the analysis. For classification purposes, 13 classes representing the various land-cover types that occur in this environment were defined for the site. There are 40 % training samples per class and a total of 3,127 testing samples.

3.2 Parameter tuning

We report experiments demonstrating the sensitivity of SGDA, CGDA and LGDA over a wide range of regularization parameters (i.e., λ in (5)) and over the dimensionality of the projected subspace.

Table 4 Classification results with the SVM classifier on the Indian Pines data set (columns: # / Class / Train / Test / SGDA / CGDA / LGDA / MGDA / GDA-DS; rows: 1 Corn-notill, 2 Corn-mintill, 3 Corn, 4 Grass-pasture, 5 Grass-trees, 6 Hay-windrowed, 7 Soybean-notill, 8 Soybean-mintill, 9 Soybean-clean, 10 Wheat, 11 Woods, 12 Build-Grass-Trees-Drives, followed by OA, AA and KC)
Table 5 Classification results with the SVM classifier on the Salinas data set (columns: # / Class / Train / Test / SGDA / CGDA / LGDA / MGDA / GDA-DS; rows: 1 Brocoli-green-weeds-1, 2 Brocoli-green-weeds-2, 3 Fallow, 4 Fallow-rough-plow, 5 Fallow-smooth, 6 Stubble, 7 Celery, 8 Grapes-untrained, 9 Soil-vinyard-develop, 10 Corn-senesced-green-weeds, 11 Lettuce-romaine-4wk, 12 Lettuce-romaine-5wk, 13 Lettuce-romaine-6wk, 14 Lettuce-romaine-7wk, 15 Vinyard-untrained, 16 Vinyard-vertical-trellis, followed by OA, AA and KC)

Figure 6 illustrates the classification accuracy of SGDA, CGDA and LGDA as a function of λ with the optimal reduced dimensionality. The parameter is chosen from {0.001, 0.01, 0.1, 1, 10, 100, 1000}. Through cross-validation in the experiments, the

Table 6 Classification results with the SVM classifier on the KSC data set (columns: Class / Train / Test / SGDA / CGDA / LGDA / MGDA / GDA-DS, followed by OA, AA and KC)
Fig. 7 Thematic maps resulting from classification for the Indian Pines dataset with 12 classes

optimal λ values of SGDA, CGDA and LGDA are set to 1000, 0.1 and 10 for the Indian Pines data; for the Salinas data, λ is set to 100, 1 and 10; and for the KSC data, λ is set to 10, 1 and 1. The parameters for MGDA, where ω1 weights SGDA, ω2 weights CGDA and ω3 weights LGDA (note that ω3 = 1 − ω1 − ω2), are shown in Table 1 for the Indian Pines data, Table 2 for the Salinas data and Table 3 for the KSC data. Note that the retained dimensionality is 41, 23 and 39, respectively. From these tables we can see that the same graph performs differently on different datasets, and that MGDA can exploit the advantages of the different graphs to reach a good result. The feature-level fusion shows its advantage in assigning weights to all the graphs adaptively. The optimal weights are ω1 = 0.1, ω2 = 0.1 and ω3 = 0.8 for the Indian Pines data; ω1 = 0.1,

Fig. 8 Thematic maps resulting from classification for the Salinas dataset with 16 classes
Fig. 9 Thematic maps resulting from classification for the KSC dataset with 13 classes

ω2 = 0.5 and ω3 = 0.4 for the Salinas data; and ω1 = 0.1, ω2 = 0.8 and ω3 = 0.1 for the KSC data. The accuracy improves greatly on the Indian Pines dataset, with an enhancement of 2.3 % to 5.3 %. For the Salinas and KSC datasets, although the original accuracies exceed 90 %, an improvement of 0.5 % to 3 % is still achieved.

3.3 Classification performance

To further validate the proposed methods, the overall classification accuracy (OA), the average classification accuracy (AA) and the Kappa coefficient (KC) are utilized to evaluate the

Fig. 10 Classification accuracy versus reduced dimensionality for methods using the Indian Pines dataset
Fig. 11 Classification accuracy versus reduced dimensionality for methods using the Salinas dataset

results in Tables 4, 5 and 6. The OAs, AAs and KCs of the proposed methods are better than those of SGDA, CGDA and LGDA. Note that on the Indian Pines dataset, the KCs of MGDA and GDA-DS exceed 0.8, while those of SGDA, CGDA and LGDA fall below 0.8. Moreover, MGDA performs better than the single-graph methods on Corn-notill and Corn-mintill in Table 4, on Vinyard-untrained in Table 5, and on the 4th class in Table 6. The classification map results are shown in Figs. 7, 8 and 9.

Figures 10, 11 and 12 illustrate the relation between classification accuracy and the reduced dimensionality for the three experimental datasets. The performance of other traditional

Fig. 12 Classification accuracy versus reduced dimensionality for methods using the KSC dataset
Fig. 13 Classification performance of methods with different training-sample sizes on the experimental datasets

classifiers, such as SGDA, CGDA and LGDA, is also included. It is apparent that MGDA and GDA-DS consistently outperform SGDA, CGDA and LGDA. Comparing the two proposed methods, MGDA performs better on the Salinas and KSC datasets, especially at larger reduced dimensionalities. We note that even at extremely low dimensionality, the performance of MGDA and GDA-DS can be superior to the others, which further suggests that the proposed strategies find a transformation that better concentrates the information in the first dimensions.

In practical situations, the number of available training samples is often insufficient to estimate models for each class. Figure 13 shows the classification performance with different numbers of training samples. For each dataset, the ratio of the training samples used to the whole available training set is varied over a range of values for the Indian Pines, Salinas and KSC data. It is apparent that the classification performance of MGDA and GDA-DS is consistently better than that of a single graph at various sample sizes, which implies that the multi-level fusion methods are more robust than the single-graph methods.
4 Conclusions

Based on the characteristics of the l1-graph, the l2-graph and the low-rank graph, two multi-level graph-fusion strategies were proposed. In the first, multiple graph-based discriminant analysis was designed to exploit the global and local structure simultaneously; in the second, graph-based discriminant analysis with D-S evidence theory was used to fuse the classification results from the separate pipelines. Compared with existing graph-embedding discriminant analysis methods, the proposed MGDA and GDA-DS can significantly reduce the dimensionality while preserving the rich statistical structure of the data. Experimental results on real hyperspectral images verified that the proposed methods consistently outperform the traditional SGDA, CGDA and LGDA, even with a small number of reduced dimensions.

Acknowledgments This work was supported by the National Natural Science Foundation of China, and partly by the Fundamental Research Funds for the Central Universities under Grants No. BUCTRC201401, BUCTRC201615, YS1404, XK1521, ZY1504.

References

1. Bao B, Liu G, Xu C, Yan S (2012) Inductive robust principal component analysis. IEEE Trans Image Process 21(8)
2. Belkin M, Niyogi P (2003) Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput 15(6)
3. Benediktsson JA, Palmason JA, Sveinsson JR (2005) Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans Geosci Remote Sens 43(3)
4. Bo C, Lu H, Wang D (2016) Hyperspectral image classification via JCR and SVM models with decision fusion. IEEE Geosci Remote Sens Lett 13(2)
5. Bo C, Lu H, Wang D (2016) Robust joint nearest subspace for hyperspectral image classification. Remote Sens Lett 7(10)
6. Candès EJ, Li X, Ma Y, Wright J (2011) Robust principal component analysis? J ACM 58(3)
7. Du Q, Yang H (2008) Similarity-based unsupervised band selection for hyperspectral image analysis.
IEEE Geosci Remote Sens Lett 5(4)
8. Fauvel M, Chanussot J, Benediktsson JA (2009) Kernel principal component analysis for the classification of hyperspectral remote sensing data over urban areas. EURASIP J Appl Signal Process 2009(1)
9. He X, Cai D, Yan S, Zhang H-J (2005) Neighborhood preserving embedding. In: Tenth IEEE International Conference on Computer Vision (ICCV'05), vol 2. IEEE
10. Kang X, Li S, Benediktsson JA (2014) Spectral-spatial hyperspectral image classification with edge-preserving filtering. IEEE Trans Geosci Remote Sens 52(5)
11. Kang X, Li S, Fang L, Benediktsson JA (2015) Intrinsic image decomposition for feature extraction of hyperspectral images. IEEE Trans Geosci Remote Sens 53(4)
12. Kruskal JB (1964) Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika 29(1)
13. Li W, Du Q, Zhang B (2015) Combined sparse and collaborative representation for hyperspectral target detection. Pattern Recogn 48
14. Li W, Prasad S, Fowler JE (2013) Noise-adjusted subspace discriminant analysis for hyperspectral imagery classification. IEEE Geosci Remote Sens Lett 10(6)
15. Li W, Prasad S, Fowler JE (2014) Decision fusion in kernel-induced spaces for hyperspectral image classification. IEEE Trans Geosci Remote Sens 52(6)
16. Li W, Prasad S, Fowler JE (2014) Hyperspectral image classification using Gaussian mixture model and Markov random field. IEEE Geosci Remote Sens Lett 11(1)
17. Li W, Chen C, Su H, Du Q (2015) Local binary patterns and extreme learning machine for hyperspectral imagery classification. IEEE Trans Geosci Remote Sens 53(7)
18. Li W, Prasad S, Fowler JE, Bruce LM (2011) Locality-preserving discriminant analysis in kernel-induced feature spaces for hyperspectral image classification. IEEE Geosci Remote Sens Lett 8(5)
19. Li W, Prasad S, Fowler JE, Bruce LM (2012) Locality-preserving dimensionality reduction and classification for hyperspectral image analysis. IEEE Trans Geosci Remote Sens 50(4)
20. Ly N, Du Q, Fowler JE (2014) Collaborative graph-based discriminant analysis for hyperspectral imagery. IEEE J Sel Topics Appl Earth Obs Remote Sens 7(6)
21. Ly NH, Du Q, Fowler JE (2014) Collaborative graph-based discriminant analysis for hyperspectral imagery. IEEE J Sel Topics Appl Earth Obs Remote Sens 7(6)
22. Ly N, Du Q, Fowler JE (2014) Sparse graph-based discriminant analysis for hyperspectral imagery. IEEE Trans Geosci Remote Sens 52(7)
23. Melgani F, Bruzzone L (2004) Classification of hyperspectral remote sensing images with support vector machines. IEEE Trans Geosci Remote Sens 42(8)
24. He X, Niyogi P (2004) Locality preserving projections. In: Advances in Neural Information Processing Systems, vol 16. MIT Press
25. Plaza A, Martínez P, Plaza J, Pérez R (2005) Dimensionality reduction and classification of hyperspectral image data using sequences of extended morphological transformations. IEEE Trans Geosci Remote Sens 43(3)
26. Rohban MH, Rabiee HR (2012) Supervised neighborhood graph construction for semi-supervised classification. Pattern Recogn 45(4)
27. Roweis ST, Saul LK (2000) Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500)
28. Shaw G, Manolakis D (2002) Signal processing for hyperspectral image exploitation. IEEE Signal Process Mag 19
29. Su H, Yang H, Du Q, Sheng Y (2011) Semisupervised band clustering for dimensionality reduction of hyperspectral imagery. IEEE Geosci Remote Sens Lett 8(6)
30. Tenenbaum JB, De Silva V, Langford JC (2000) A global geometric framework for nonlinear dimensionality reduction.
Science 290(5500)
31. Vapnik V (1998) Statistical learning theory, vol 1. Wiley, New York
32. Wright J, Ma Y, Mairal J, Sapiro G, Huang T, Yan S (2010) Sparse representation for computer vision and pattern recognition. Proc IEEE 98(6)
33. Yan S, Xu D, Zhang B, Zhang H, Yang Q, Lin S (2007) Graph embedding and extensions: a general framework for dimensionality reduction. IEEE Trans Pattern Anal Mach Intell 29(1)
34. Zeng D, Xu J, Xu G (2008) Data fusion for traffic incident detection using D-S evidence theory with probabilistic SVMs. J Comput 3(10)
35. Zhuang L, Gao H, Lin Z, Ma Y, Zhang X, Yu N (2012) Non-negative low rank and sparse graph for semi-supervised learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, Rhode Island

Fubiao Feng is currently pursuing the M.S. degree at Beijing University of Chemical Technology, Beijing, China. His supervisor is Dr. Wei Li and his co-supervisor is Dr. Qiong Ran.
Qiong Ran received her Ph.D. degree from the Institute of Remote Sensing Applications, Chinese Academy of Sciences (CAS), Beijing, China. She has published over 10 papers in China and abroad. She is currently with the College of Information Science and Technology at Beijing University of Chemical Technology, Beijing, China. Her research interests include image acquisition, image processing, hyperspectral image analysis and applications.

Wei Li received his Ph.D. degree in electrical and computer engineering from Mississippi State University, Starkville. Subsequently, he spent one year as a postdoctoral researcher at the University of California, Davis. He is currently with the College of Information Science and Technology at Beijing University of Chemical Technology, Beijing, China. His research interests include statistical pattern recognition, hyperspectral image analysis, and data compression.
More informationRobust Pose Estimation using the SwissRanger SR-3000 Camera
Robust Pose Estimation using the SwissRanger SR- Camera Sigurjón Árni Guðmundsson, Rasmus Larsen and Bjarne K. Ersbøll Technical University of Denmark, Informatics and Mathematical Modelling. Building,
More informationTechnical Report. Title: Manifold learning and Random Projections for multi-view object recognition
Technical Report Title: Manifold learning and Random Projections for multi-view object recognition Authors: Grigorios Tsagkatakis 1 and Andreas Savakis 2 1 Center for Imaging Science, Rochester Institute
More informationNon-linear dimension reduction
Sta306b May 23, 2011 Dimension Reduction: 1 Non-linear dimension reduction ISOMAP: Tenenbaum, de Silva & Langford (2000) Local linear embedding: Roweis & Saul (2000) Local MDS: Chen (2006) all three methods
More informationThe Analysis of Parameters t and k of LPP on Several Famous Face Databases
The Analysis of Parameters t and k of LPP on Several Famous Face Databases Sujing Wang, Na Zhang, Mingfang Sun, and Chunguang Zhou College of Computer Science and Technology, Jilin University, Changchun
More informationHYPERSPECTRAL imagery has been increasingly used
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 14, NO. 5, MAY 2017 597 Transferred Deep Learning for Anomaly Detection in Hyperspectral Imagery Wei Li, Senior Member, IEEE, Guodong Wu, and Qian Du, Senior
More informationHYPERSPECTRAL imagery (HSI) records hundreds of
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 11, NO. 1, JANUARY 2014 173 Classification Based on 3-D DWT and Decision Fusion for Hyperspectral Image Analysis Zhen Ye, Student Member, IEEE, Saurabh
More informationA MAXIMUM NOISE FRACTION TRANSFORM BASED ON A SENSOR NOISE MODEL FOR HYPERSPECTRAL DATA. Naoto Yokoya 1 and Akira Iwasaki 2
A MAXIMUM NOISE FRACTION TRANSFORM BASED ON A SENSOR NOISE MODEL FOR HYPERSPECTRAL DATA Naoto Yokoya 1 and Akira Iwasaki 1 Graduate Student, Department of Aeronautics and Astronautics, The University of
More informationGlobally and Locally Consistent Unsupervised Projection
Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence Globally and Locally Consistent Unsupervised Projection Hua Wang, Feiping Nie, Heng Huang Department of Electrical Engineering
More informationLearning based face hallucination techniques: A survey
Vol. 3 (2014-15) pp. 37-45. : A survey Premitha Premnath K Department of Computer Science & Engineering Vidya Academy of Science & Technology Thrissur - 680501, Kerala, India (email: premithakpnath@gmail.com)
More informationDoes Normalization Methods Play a Role for Hyperspectral Image Classification?
Does Normalization Methods Play a Role for Hyperspectral Image Classification? Faxian Cao 1, Zhijing Yang 1*, Jinchang Ren 2, Mengying Jiang 1, Wing-Kuen Ling 1 1 School of Information Engineering, Guangdong
More informationManifold Learning for Video-to-Video Face Recognition
Manifold Learning for Video-to-Video Face Recognition Abstract. We look in this work at the problem of video-based face recognition in which both training and test sets are video sequences, and propose
More informationHyperspectral Image Classification Using Gradient Local Auto-Correlations
Hyperspectral Image Classification Using Gradient Local Auto-Correlations Chen Chen 1, Junjun Jiang 2, Baochang Zhang 3, Wankou Yang 4, Jianzhong Guo 5 1. epartment of Electrical Engineering, University
More informationSELECTION OF THE OPTIMAL PARAMETER VALUE FOR THE LOCALLY LINEAR EMBEDDING ALGORITHM. Olga Kouropteva, Oleg Okun and Matti Pietikäinen
SELECTION OF THE OPTIMAL PARAMETER VALUE FOR THE LOCALLY LINEAR EMBEDDING ALGORITHM Olga Kouropteva, Oleg Okun and Matti Pietikäinen Machine Vision Group, Infotech Oulu and Department of Electrical and
More informationKERNEL-based methods, such as support vector machines
48 IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 12, NO. 1, JANUARY 2015 Kernel Collaborative Representation With Tikhonov Regularization for Hyperspectral Image Classification Wei Li, Member, IEEE,QianDu,Senior
More informationROBUST JOINT SPARSITY MODEL FOR HYPERSPECTRAL IMAGE CLASSIFICATION. Wuhan University, China
ROBUST JOINT SPARSITY MODEL FOR HYPERSPECTRAL IMAGE CLASSIFICATION Shaoguang Huang 1, Hongyan Zhang 2, Wenzhi Liao 1 and Aleksandra Pižurica 1 1 Department of Telecommunications and Information Processing,
More informationSchroedinger Eigenmaps with Nondiagonal Potentials for Spatial-Spectral Clustering of Hyperspectral Imagery
Schroedinger Eigenmaps with Nondiagonal Potentials for Spatial-Spectral Clustering of Hyperspectral Imagery Nathan D. Cahill a, Wojciech Czaja b, and David W. Messinger c a Center for Applied and Computational
More informationA new Graph constructor for Semi-supervised Discriminant Analysis via Group Sparsity
2011 Sixth International Conference on Image and Graphics A new Graph constructor for Semi-supervised Discriminant Analysis via Group Sparsity Haoyuan Gao, Liansheng Zhuang, Nenghai Yu MOE-MS Key Laboratory
More informationClassification of Hyperspectral Data over Urban. Areas Using Directional Morphological Profiles and. Semi-supervised Feature Extraction
IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, VOL.X, NO.X, Y 1 Classification of Hyperspectral Data over Urban Areas Using Directional Morphological Profiles and Semi-supervised
More informationLocality Preserving Projections (LPP) Abstract
Locality Preserving Projections (LPP) Xiaofei He Partha Niyogi Computer Science Department Computer Science Department The University of Chicago The University of Chicago Chicago, IL 60615 Chicago, IL
More informationGRAPH-BASED SEMI-SUPERVISED HYPERSPECTRAL IMAGE CLASSIFICATION USING SPATIAL INFORMATION
GRAPH-BASED SEMI-SUPERVISED HYPERSPECTRAL IMAGE CLASSIFICATION USING SPATIAL INFORMATION Nasehe Jamshidpour a, Saeid Homayouni b, Abdolreza Safari a a Dept. of Geomatics Engineering, College of Engineering,
More informationDimension reduction for hyperspectral imaging using laplacian eigenmaps and randomized principal component analysis
Dimension reduction for hyperspectral imaging using laplacian eigenmaps and randomized principal component analysis Yiran Li yl534@math.umd.edu Advisor: Wojtek Czaja wojtek@math.umd.edu 10/17/2014 Abstract
More informationFuzzy Entropy based feature selection for classification of hyperspectral data
Fuzzy Entropy based feature selection for classification of hyperspectral data Mahesh Pal Department of Civil Engineering NIT Kurukshetra, 136119 mpce_pal@yahoo.co.uk Abstract: This paper proposes to use
More informationLarge-Scale Face Manifold Learning
Large-Scale Face Manifold Learning Sanjiv Kumar Google Research New York, NY * Joint work with A. Talwalkar, H. Rowley and M. Mohri 1 Face Manifold Learning 50 x 50 pixel faces R 2500 50 x 50 pixel random
More informationLocality Preserving Projections (LPP) Abstract
Locality Preserving Projections (LPP) Xiaofei He Partha Niyogi Computer Science Department Computer Science Department The University of Chicago The University of Chicago Chicago, IL 60615 Chicago, IL
More informationSparsity Preserving Canonical Correlation Analysis
Sparsity Preserving Canonical Correlation Analysis Chen Zu and Daoqiang Zhang Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China {zuchen,dqzhang}@nuaa.edu.cn
More informationDimension reduction for hyperspectral imaging using laplacian eigenmaps and randomized principal component analysis:midyear Report
Dimension reduction for hyperspectral imaging using laplacian eigenmaps and randomized principal component analysis:midyear Report Yiran Li yl534@math.umd.edu Advisor: Wojtek Czaja wojtek@math.umd.edu
More informationTime Series Clustering Ensemble Algorithm Based on Locality Preserving Projection
Based on Locality Preserving Projection 2 Information & Technology College, Hebei University of Economics & Business, 05006 Shijiazhuang, China E-mail: 92475577@qq.com Xiaoqing Weng Information & Technology
More informationSpectral-Spatial Response for Hyperspectral Image Classification
Article Spectral-Spatial Response for Hyperspectral Image Classification Yantao Wei 1,2, *,, Yicong Zhou 2, and Hong Li 3 1 School of Educational Information Technology, Central China Normal University,
More informationSTRATIFIED SAMPLING METHOD BASED TRAINING PIXELS SELECTION FOR HYPER SPECTRAL REMOTE SENSING IMAGE CLASSIFICATION
Volume 117 No. 17 2017, 121-126 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu STRATIFIED SAMPLING METHOD BASED TRAINING PIXELS SELECTION FOR HYPER
More informationIEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 12, NO. 2, FEBRUARY
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 12, NO. 2, FEBRUARY 2015 349 Subspace-Based Support Vector Machines for Hyperspectral Image Classification Lianru Gao, Jun Li, Member, IEEE, Mahdi Khodadadzadeh,
More informationSchool of Computer and Communication, Lanzhou University of Technology, Gansu, Lanzhou,730050,P.R. China
Send Orders for Reprints to reprints@benthamscienceae The Open Automation and Control Systems Journal, 2015, 7, 253-258 253 Open Access An Adaptive Neighborhood Choosing of the Local Sensitive Discriminant
More informationDetecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference
Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Minh Dao 1, Xiang Xiang 1, Bulent Ayhan 2, Chiman Kwan 2, Trac D. Tran 1 Johns Hopkins Univeristy, 3400
More informationExploring Structural Consistency in Graph Regularized Joint Spectral-Spatial Sparse Coding for Hyperspectral Image Classification
1 Exploring Structural Consistency in Graph Regularized Joint Spectral-Spatial Sparse Coding for Hyperspectral Image Classification Changhong Liu, Jun Zhou, Senior Member, IEEE, Jie Liang, Yuntao Qian,
More informationLearning a Manifold as an Atlas Supplementary Material
Learning a Manifold as an Atlas Supplementary Material Nikolaos Pitelis Chris Russell School of EECS, Queen Mary, University of London [nikolaos.pitelis,chrisr,lourdes]@eecs.qmul.ac.uk Lourdes Agapito
More informationSubspace Clustering. Weiwei Feng. December 11, 2015
Subspace Clustering Weiwei Feng December 11, 2015 Abstract Data structure analysis is an important basis of machine learning and data science, which is now widely used in computational visualization problems,
More informationAn efficient face recognition algorithm based on multi-kernel regularization learning
Acta Technica 61, No. 4A/2016, 75 84 c 2017 Institute of Thermomechanics CAS, v.v.i. An efficient face recognition algorithm based on multi-kernel regularization learning Bi Rongrong 1 Abstract. A novel
More informationDimension Reduction CS534
Dimension Reduction CS534 Why dimension reduction? High dimensionality large number of features E.g., documents represented by thousands of words, millions of bigrams Images represented by thousands of
More informationRemote Sensed Image Classification based on Spatial and Spectral Features using SVM
RESEARCH ARTICLE OPEN ACCESS Remote Sensed Image Classification based on Spatial and Spectral Features using SVM Mary Jasmine. E PG Scholar Department of Computer Science and Engineering, University College
More informationFace Recognition Based on LDA and Improved Pairwise-Constrained Multiple Metric Learning Method
Journal of Information Hiding and Multimedia Signal Processing c 2016 ISSN 2073-4212 Ubiquitous International Volume 7, Number 5, September 2016 Face Recognition ased on LDA and Improved Pairwise-Constrained
More informationHyperspectral Data Classification via Sparse Representation in Homotopy
Hyperspectral Data Classification via Sparse Representation in Homotopy Qazi Sami ul Haq,Lixin Shi,Linmi Tao,Shiqiang Yang Key Laboratory of Pervasive Computing, Ministry of Education Department of Computer
More informationRobust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma
Robust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma Presented by Hu Han Jan. 30 2014 For CSE 902 by Prof. Anil K. Jain: Selected
More informationFuzzy Bidirectional Weighted Sum for Face Recognition
Send Orders for Reprints to reprints@benthamscience.ae The Open Automation and Control Systems Journal, 2014, 6, 447-452 447 Fuzzy Bidirectional Weighted Sum for Face Recognition Open Access Pengli Lu
More informationA Discriminative Non-Linear Manifold Learning Technique for Face Recognition
A Discriminative Non-Linear Manifold Learning Technique for Face Recognition Bogdan Raducanu 1 and Fadi Dornaika 2,3 1 Computer Vision Center, 08193 Bellaterra, Barcelona, Spain bogdan@cvc.uab.es 2 IKERBASQUE,
More informationData fusion and multi-cue data matching using diffusion maps
Data fusion and multi-cue data matching using diffusion maps Stéphane Lafon Collaborators: Raphy Coifman, Andreas Glaser, Yosi Keller, Steven Zucker (Yale University) Part of this work was supported by
More informationImage Similarities for Learning Video Manifolds. Selen Atasoy MICCAI 2011 Tutorial
Image Similarities for Learning Video Manifolds Selen Atasoy MICCAI 2011 Tutorial Image Spaces Image Manifolds Tenenbaum2000 Roweis2000 Tenenbaum2000 [Tenenbaum2000: J. B. Tenenbaum, V. Silva, J. C. Langford:
More informationSpectral-spatial rotation forest for hyperspectral image classification
Spectral-spatial rotation forest for hyperspectral image classification Junshi Xia, Lionel Bombrun, Yannick Berthoumieu, Christian Germain, Peijun Du To cite this version: Junshi Xia, Lionel Bombrun, Yannick
More informationDUe to the rapid development and proliferation of hyperspectral. Hyperspectral Image Classification in the Presence of Noisy Labels
Hyperspectral Image Classification in the Presence of Noisy Labels Junjun Jiang, Jiayi Ma, Zheng Wang, Chen Chen, and Xianming Liu arxiv:89.422v [cs.cv] 2 Sep 28 Abstract Label information plays an important
More informationTextural Features for Hyperspectral Pixel Classification
Textural Features for Hyperspectral Pixel Classification Olga Rajadell, Pedro García-Sevilla, and Filiberto Pla Depto. Lenguajes y Sistemas Informáticos Jaume I University, Campus Riu Sec s/n 12071 Castellón,
More informationCOSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor
COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality
More informationR-VCANet: A New Deep Learning-Based Hyperspectral Image Classification Method
IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 1 R-VCANet: A New Deep Learning-Based Hyperspectral Image Classification Method Bin Pan, Zhenwei Shi and Xia Xu Abstract
More informationREMOTE sensing hyperspectral images (HSI) are acquired
IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, VOL. 10, NO. 3, MARCH 2017 1151 Exploring Structural Consistency in Graph Regularized Joint Spectral-Spatial Sparse Coding
More informationHead Frontal-View Identification Using Extended LLE
Head Frontal-View Identification Using Extended LLE Chao Wang Center for Spoken Language Understanding, Oregon Health and Science University Abstract Automatic head frontal-view identification is challenging
More informationHyperspectral image segmentation using spatial-spectral graphs
Hyperspectral image segmentation using spatial-spectral graphs David B. Gillis* and Jeffrey H. Bowles Naval Research Laboratory, Remote Sensing Division, Washington, DC 20375 ABSTRACT Spectral graph theory
More informationDEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION
DEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION S.Dhanalakshmi #1 #PG Scholar, Department of Computer Science, Dr.Sivanthi Aditanar college of Engineering, Tiruchendur
More informationFusion of pixel based and object based features for classification of urban hyperspectral remote sensing data
Fusion of pixel based and object based features for classification of urban hyperspectral remote sensing data Wenzhi liao a, *, Frieke Van Coillie b, Flore Devriendt b, Sidharta Gautama a, Aleksandra Pizurica
More informationDimension Reduction of Image Manifolds
Dimension Reduction of Image Manifolds Arian Maleki Department of Electrical Engineering Stanford University Stanford, CA, 9435, USA E-mail: arianm@stanford.edu I. INTRODUCTION Dimension reduction of datasets
More information4178 IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, VOL. 9, NO. 9, SEPTEMBER 2016
4178 IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, VOL. 9, NO. 9, SEPTEMBER 016 Hyperspectral Image Classification by Fusing Collaborative and Sparse Representations
More informationA Supervised Non-linear Dimensionality Reduction Approach for Manifold Learning
A Supervised Non-linear Dimensionality Reduction Approach for Manifold Learning B. Raducanu 1 and F. Dornaika 2,3 1 Computer Vision Center, Barcelona, SPAIN 2 Department of Computer Science and Artificial
More informationSemi-supervised Data Representation via Affinity Graph Learning
1 Semi-supervised Data Representation via Affinity Graph Learning Weiya Ren 1 1 College of Information System and Management, National University of Defense Technology, Changsha, Hunan, P.R China, 410073
More informationA Robust Sparse Representation Model for Hyperspectral Image Classification
sensors Article A Robust Sparse Representation Model for Hyperspectral Image Classification Shaoguang Huang 1, *, Hongyan Zhang 2 and Aleksandra Pižurica 1 1 Department of Telecommunications and Information
More informationGraph Autoencoder-Based Unsupervised Feature Selection
Graph Autoencoder-Based Unsupervised Feature Selection Siwei Feng Department of Electrical and Computer Engineering University of Massachusetts Amherst Amherst, MA, 01003 siwei@umass.edu Marco F. Duarte
More informationResearch Article Hyperspectral Image Classification Using Kernel Fukunaga-Koontz Transform
Mathematical Problems in Engineering Volume 13, Article ID 471915, 7 pages http://dx.doi.org/1.1155/13/471915 Research Article Hyperspectral Image Classification Using Kernel Fukunaga-Koontz Transform
More informationTitle: A Deep Network Architecture for Super-resolution aided Hyperspectral Image Classification with Class-wise Loss
2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising
More informationA New Orthogonalization of Locality Preserving Projection and Applications
A New Orthogonalization of Locality Preserving Projection and Applications Gitam Shikkenawis 1,, Suman K. Mitra, and Ajit Rajwade 2 1 Dhirubhai Ambani Institute of Information and Communication Technology,
More informationRobust Face Recognition via Sparse Representation
Robust Face Recognition via Sparse Representation Panqu Wang Department of Electrical and Computer Engineering University of California, San Diego La Jolla, CA 92092 pawang@ucsd.edu Can Xu Department of
More informationSpatially variant dimensionality reduction for the visualization of multi/hyperspectral images
Author manuscript, published in "International Conference on Image Analysis and Recognition, Burnaby : Canada (2011)" DOI : 10.1007/978-3-642-21593-3_38 Spatially variant dimensionality reduction for the
More informationCSE 6242 A / CS 4803 DVA. Feb 12, Dimension Reduction. Guest Lecturer: Jaegul Choo
CSE 6242 A / CS 4803 DVA Feb 12, 2013 Dimension Reduction Guest Lecturer: Jaegul Choo CSE 6242 A / CS 4803 DVA Feb 12, 2013 Dimension Reduction Guest Lecturer: Jaegul Choo Data is Too Big To Do Something..
More informationLEARNING COMPRESSED IMAGE CLASSIFICATION FEATURES. Qiang Qiu and Guillermo Sapiro. Duke University, Durham, NC 27708, USA
LEARNING COMPRESSED IMAGE CLASSIFICATION FEATURES Qiang Qiu and Guillermo Sapiro Duke University, Durham, NC 2778, USA ABSTRACT Learning a transformation-based dimension reduction, thereby compressive,
More informationTrace Ratio Criterion for Feature Selection
Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008) Trace Ratio Criterion for Feature Selection Feiping Nie 1, Shiming Xiang 1, Yangqing Jia 1, Changshui Zhang 1 and Shuicheng
More informationLinear Discriminant Analysis for 3D Face Recognition System
Linear Discriminant Analysis for 3D Face Recognition System 3.1 Introduction Face recognition and verification have been at the top of the research agenda of the computer vision community in recent times.
More informationClassification of Hyperspectral Breast Images for Cancer Detection. Sander Parawira December 4, 2009
1 Introduction Classification of Hyperspectral Breast Images for Cancer Detection Sander Parawira December 4, 2009 parawira@stanford.edu In 2009 approximately one out of eight women has breast cancer.
More informationSpatial-Spectral Dimensionality Reduction of Hyperspectral Imagery with Partial Knowledge of Class Labels
Spatial-Spectral Dimensionality Reduction of Hyperspectral Imagery with Partial Knowledge of Class Labels Nathan D. Cahill, Selene E. Chew, and Paul S. Wenger Center for Applied and Computational Mathematics,
More informationAdaptive Doppler centroid estimation algorithm of airborne SAR
Adaptive Doppler centroid estimation algorithm of airborne SAR Jian Yang 1,2a), Chang Liu 1, and Yanfei Wang 1 1 Institute of Electronics, Chinese Academy of Sciences 19 North Sihuan Road, Haidian, Beijing
More informationFace Recognition using Laplacianfaces
Journal homepage: www.mjret.in ISSN:2348-6953 Kunal kawale Face Recognition using Laplacianfaces Chinmay Gadgil Mohanish Khunte Ajinkya Bhuruk Prof. Ranjana M.Kedar Abstract Security of a system is an
More informationFrame based kernel methods for hyperspectral imagery data
Frame based kernel methods for hyperspectral imagery data Norbert Wiener Center Department of Mathematics University of Maryland, College Park Recent Advances in Harmonic Analysis and Elliptic Partial
More informationHeat Kernel Based Local Binary Pattern for Face Representation
JOURNAL OF LATEX CLASS FILES 1 Heat Kernel Based Local Binary Pattern for Face Representation Xi Li, Weiming Hu, Zhongfei Zhang, Hanzi Wang Abstract Face classification has recently become a very hot research
More informationSpectral Angle Based Unary Energy Functions for Spatial-Spectral Hyperspectral Classification Using Markov Random Fields
Rochester Institute of Technology RIT Scholar Works Presentations and other scholarship 7-31-2016 Spectral Angle Based Unary Energy Functions for Spatial-Spectral Hyperspectral Classification Using Markov
More informationIEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 1
IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 1 Exploring Locally Adaptive Dimensionality Reduction for Hyperspectral Image Classification: A Maximum Margin Metric Learning
More informationA Robust Band Compression Technique for Hyperspectral Image Classification
A Robust Band Compression Technique for Hyperspectral Image Classification Qazi Sami ul Haq,Lixin Shi,Linmi Tao,Shiqiang Yang Key Laboratory of Pervasive Computing, Ministry of Education Department of
More informationSelecting Models from Videos for Appearance-Based Face Recognition
Selecting Models from Videos for Appearance-Based Face Recognition Abdenour Hadid and Matti Pietikäinen Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering P.O.
More informationGeneralized Autoencoder: A Neural Network Framework for Dimensionality Reduction
Generalized Autoencoder: A Neural Network Framework for Dimensionality Reduction Wei Wang 1, Yan Huang 1, Yizhou Wang 2, Liang Wang 1 1 Center for Research on Intelligent Perception and Computing, CRIPAC
More informationAppearance Manifold of Facial Expression
Appearance Manifold of Facial Expression Caifeng Shan, Shaogang Gong and Peter W. McOwan Department of Computer Science Queen Mary, University of London, London E1 4NS, UK {cfshan, sgg, pmco}@dcs.qmul.ac.uk
More informationLinear Laplacian Discrimination for Feature Extraction
Linear Laplacian Discrimination for Feature Extraction Deli Zhao Zhouchen Lin Rong Xiao Xiaoou Tang Microsoft Research Asia, Beijing, China delizhao@hotmail.com, {zhoulin,rxiao,xitang}@microsoft.com Abstract
More informationRegion Based Image Fusion Using SVM
Region Based Image Fusion Using SVM Yang Liu, Jian Cheng, Hanqing Lu National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences ABSTRACT This paper presents a novel
More informationA Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation
, pp.162-167 http://dx.doi.org/10.14257/astl.2016.138.33 A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation Liqiang Hu, Chaofeng He Shijiazhuang Tiedao University,
More informationIsometric Mapping Hashing
Isometric Mapping Hashing Yanzhen Liu, Xiao Bai, Haichuan Yang, Zhou Jun, and Zhihong Zhang Springer-Verlag, Computer Science Editorial, Tiergartenstr. 7, 692 Heidelberg, Germany {alfred.hofmann,ursula.barth,ingrid.haas,frank.holzwarth,
More informationAn Adaptive Threshold LBP Algorithm for Face Recognition
An Adaptive Threshold LBP Algorithm for Face Recognition Xiaoping Jiang 1, Chuyu Guo 1,*, Hua Zhang 1, and Chenghua Li 1 1 College of Electronics and Information Engineering, Hubei Key Laboratory of Intelligent
More informationLocally Linear Landmarks for large-scale manifold learning
Locally Linear Landmarks for large-scale manifold learning Max Vladymyrov and Miguel Á. Carreira-Perpiñán Electrical Engineering and Computer Science University of California, Merced http://eecs.ucmerced.edu
More informationA CNN-based Spatial Feature Fusion Algorithm for Hyperspectral Imagery Classification. Alan J.X. Guo, Fei Zhu. February 1, 2018
A CNN-based Spatial Feature Fusion Algorithm for Hyperspectral Imagery Classification Alan J.X. Guo, Fei Zhu February 1, 2018 arxiv:1801.10355v1 [cs.cv] 31 Jan 2018 Abstract The shortage of training samples
More informationLow-dimensional Representations of Hyperspectral Data for Use in CRF-based Classification
Rochester Institute of Technology RIT Scholar Works Presentations and other scholarship 8-31-2015 Low-dimensional Representations of Hyperspectral Data for Use in CRF-based Classification Yang Hu Nathan
More information