IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 54, NO. 12, DECEMBER 2016, pp. 7066-7076

Laplacian Regularized Collaborative Graph for Discriminant Analysis of Hyperspectral Imagery

Wei Li, Member, IEEE, and Qian Du, Senior Member, IEEE

Abstract

Collaborative graph-based discriminant analysis (CGDA) has recently been proposed for dimensionality reduction and classification of hyperspectral imagery, offering superior performance. In CGDA, a graph is constructed by l2-norm minimization-based representation using the available labeled samples. Different from sparse graph-based discriminant analysis (SGDA), where a graph is built by l1-norm minimization, CGDA benefits from within-class sample collaboration and computational efficiency. However, CGDA does not consider the data manifold structure that reflects geometric information. To improve CGDA in this regard, a Laplacian regularized CGDA (LapCGDA) framework is proposed, where a Laplacian graph of the data manifold is incorporated into CGDA. By taking advantage of this graph regularizer, the proposed method not only offers collaborative representation but also exploits the intrinsic geometric information. Moreover, both CGDA and LapCGDA are extended into kernel versions to further improve performance. Experimental results on several different multiple-class hyperspectral classification tasks demonstrate the effectiveness of the proposed LapCGDA.

Index Terms: Collaborative graph, dimensionality reduction, graph embedding, hyperspectral data, Laplacian matrix.

Manuscript received April 12, 2016; revised June 23, 2016; accepted July 25, 2016. Date of publication August 12, 2016; date of current version September 30, 2016. This work was supported in part by the National Natural Science Foundation of China and in part by the Fundamental Research Funds for the Central Universities under Grant BUCTRC201401, Grant BUCTRC201615, and Grant XK1521. W. Li is with the College of Information Science and Technology, Beijing University of Chemical Technology, Beijing, China (e-mail: liwei089@ieee.org). Q. Du is with the Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS, USA (e-mail: du@ece.msstate.edu).

I. INTRODUCTION

HYPERSPECTRAL imagery consists of hundreds of contiguous spectral wavelength bands that are highly correlated. High dimensionality usually leads to the curse-of-dimensionality problem, which deteriorates classification performance, especially when the number of available labeled samples is limited [1]-[5]. Dimensionality-reduction algorithms, which remove redundant features and preserve useful information in a low-dimensional subspace [6], [7], have been substantially investigated for hyperspectral image analysis. The projection-based strategy is one of the major categories of dimensionality reduction; its essence is to project the original bands into a lower dimensional subspace according to a certain criterion function. For example, principal component analysis (PCA) [8] attempts to find a linear transformation that maximizes the variance in the projected subspace, whereas Fisher's linear discriminant analysis (LDA) [9] maximizes the trace ratio between the between-class scatter and the within-class scatter.
There are numerous modified versions, including kernel versions such as kernel PCA [10], kernel LDA [11], local Fisher discriminant analysis (LFDA) [12], genetic-algorithm-based LFDA [13], and kernel LFDA [14]. Unlike PCA or LDA, locality preserving projection (LPP) [15] seeks a linear map that preserves the geometric information of neighboring samples in the original space. In [12], this type of manifold learning technique was verified to be excellent at capturing manifold structure in hyperspectral imagery.

Graphs, as a mathematical form of data representation, have been successfully used for remote sensing image analysis, such as classification, segmentation, detection, and data fusion [16]-[20]. Recently, due to the effectiveness of graph embedding, graph-based dimensionality reduction has received great attention [21]-[26]. A general framework for dimensionality reduction, denoted as sparsity-preserving graph embedding, was proposed in [27]. Compared with the traditional k-nearest-neighbor (k-NN)-based graph [28], the sparsity-based graph provides greater robustness to additive data noise [29]. In [30], sparse graph-based discriminant analysis (SGDA) was developed for dimensionality reduction and classification of hyperspectral imagery; SGDA preserves sparse connections among class-specific labeled samples. Weighted SGDA was proposed to integrate both locality and sparsity structures [31]. In [32], block-based SGDA was employed for semisupervised classification. In [33], simultaneous sparse graph embedding was proposed. In [34], sparse and low-rank graph-based discriminant analysis was presented, combining sparsity and low rankness to maintain global and local structures simultaneously.

Different from the aforementioned sparse graphs, collaborative graph-based discriminant analysis (CGDA) [35] was presented by replacing the l1-norm minimization used to solve the weight matrix with an l2-norm minimization. The motivation is the observation that it is the collaborative nature of the representation, rather than the competitive nature imposed by the sparsity constraint, that actually provides the good classification performance [36]-[39]. Furthermore, CGDA is computationally very efficient because a closed-form solution is available when estimating the representation coefficients. In [35], CGDA was demonstrated to offer superior classification performance at lower computational cost than SGDA, and can thus be viewed as the better choice. Nevertheless, CGDA does not consider data manifold structure. There are research works in the literature related to embedding local manifold structures, such as LPP [15], locally linear embedding [40], and neighborhood preserving embedding [41]. In these methods, for two data points that lie close to each other in the original space, their intrinsic geometric relationship should be preserved in the new subspace.

Based on this concept, a Laplacian regularized Gaussian mixture model (LapGMM) was presented for data clustering [42], and Laplacian regularized low-rank representation (LapLRR) was developed for image clustering and classification [43]. In [44], graph construction using local manifold learning was proposed for semisupervised hyperspectral image classification. In [45], sparse discriminant embedding with manifold learning was presented for dimensionality reduction in hyperspectral imagery. Motivated by these works, a Laplacian regularized CGDA (LapCGDA) framework is proposed, where a Laplacian graph of the data manifold is incorporated into CGDA during graph construction. Such a Laplacian graph captures the intrinsic geometrical structure so that pixel relationships in the original data geometry are preserved. By taking advantage of this graph regularizer, the proposed method not only offers collaborative representation but also exploits the intrinsic geometric information, providing more discriminative power than the original CGDA.

The main contributions of this paper can be summarized as follows.

1) To the best of our knowledge, this is the first time that a collaborative and Laplacian graph is adopted for dimensionality reduction and classification in hyperspectral imagery, and graph construction can be as fast as in CGDA since a closed-form solution can be derived.

2) The resulting graph combines constraints on both collaboration in the representation and preservation of the data manifold structure, which makes the induced projection more stable and discriminative.

3) Both CGDA and LapCGDA are further extended into kernel versions, which are able to extract nonlinear discriminant features in kernel-induced spaces.

The remainder of this paper is organized as follows. Section II reviews the graph-embedding dimensionality-reduction framework, including SGDA and CGDA. Section III describes the proposed LapCGDA algorithm and its kernel version. Section IV validates the proposed approaches and reports classification results compared with several state-of-the-art alternatives. Section V makes some concluding remarks.

II. RELATED WORK

A. Graph-Embedding Dimensionality Reduction

Consider a hyperspectral data set with M labeled samples denoted as X = \{x_i\}_{i=1}^{M} in an R^{d \times 1} feature space, where d is the number of bands. An intrinsic graph is denoted as G = \{X, W\}, with W being an affinity matrix, and a penalty graph is represented as G_p = \{X, W_p\}, with W_p being a penalty weight matrix. Let C be the number of classes, m_l be the number of available labeled samples in the lth class, and \sum_{l=1}^{C} m_l = M. The graph-embedding dimensionality-reduction framework [21], [27] seeks a d \times K projection matrix P (with K \ll d), which results in a low-dimensional subspace Y = P^T X. The objective is to maintain class separability by preserving the relationship of data points in the original space. The objective function can be written as

P^* = \arg\min_{P} \frac{\sum_{i \neq j} \| P^T x_i - P^T x_j \|^2 W_{i,j}}{P^T X L_p X^T P} = \arg\min_{P} \frac{\operatorname{tr}(P^T X L X^T P)}{P^T X L_p X^T P}    (1)

where L is the Laplacian matrix of graph G, L = D - W, D is a diagonal matrix with the ith diagonal element D_{ii} = \sum_{j=1}^{M} W_{i,j}, and L_p may be the Laplacian matrix of the penalty graph G_p or a simple scale normalization constraint [21].
The optimal projection matrix P can be obtained as

P^* = \arg\min_{P} \frac{P^T X L X^T P}{P^T X L_p X^T P}    (2)

which can be solved as a generalized eigenvalue decomposition problem

X L X^T P = \Lambda X L_p X^T P    (3)

where \Lambda is a diagonal eigenvalue matrix. The d \times K projection matrix P is constructed from the K eigenvectors corresponding to the K smallest nonzero eigenvalues. Note that the performance of graph-embedding-based dimensionality-reduction algorithms mainly depends on the choice of G.

B. CGDA

In CGDA [35], for each pixel x_i in the dictionary X, the collaborative representation vector is calculated by solving the l_2-norm optimization problem

\arg\min_{w_i} \| w_i \|_2 \quad \text{s.t.} \quad X_i w_i = x_i    (4)

where X_i does not include x_i itself, and w_i is a vector of size (M-1) \times 1. If the l_2-norm is replaced with the l_1-norm, (4) becomes the objective function of SGDA [30], [32]. Note that (4) can be further rewritten as

\arg\min_{w_i} \| x_i - X_i w_i \|_2^2 + \lambda \| w_i \|_2^2    (5)

where \lambda is a Lagrange multiplier. Equation (5) is equivalent to

\arg\min_{w_i} \left[ w_i^T (X_i^T X_i + \lambda I) w_i - 2 w_i^T X_i^T x_i \right].    (6)

Taking the derivative with respect to w_i and setting the resulting equation to zero yields the closed-form solution

w_i = (X_i^T X_i + \lambda I)^{-1} X_i^T x_i.    (7)

We define W = [w_1, w_2, \ldots, w_M] as the graph weight matrix of size M \times M, whose column w_i is the collaborative representation vector corresponding to x_i. Note that the diagonal elements of W are set to zero. In [35], the affinity matrix W is actually calculated using the within-class samples only. Thus, W can be expressed in block-diagonal form

W = \begin{bmatrix} W^{(1)} & & \\ & \ddots & \\ & & W^{(C)} \end{bmatrix}    (8)

where W^{(l)}, l = 1, \ldots, C, is the weight matrix of size m_l \times m_l computed using the labeled samples of the lth class only. It has been demonstrated that this strategy with class label information has better discriminant ability [30].
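To make the construction above concrete, the following is a minimal Python/NumPy sketch (not the authors' implementation) of CGDA graph construction and the subsequent graph-embedding projection: it builds each within-class weight block with the closed form in (7), assembles the block-diagonal W in (8), and solves the generalized eigenproblem in (3). Taking L_p as the identity (the simple scale normalization constraint mentioned after (1)), the ridge term, and the explicit symmetrization are illustrative implementation choices, not details prescribed by the paper.

```python
import numpy as np
from scipy.linalg import block_diag, eigh

def cgda_class_block(Xl, lam=1e-2):
    """Collaborative weights for one class (samples as columns of Xl), eq. (7)."""
    m = Xl.shape[1]
    W = np.zeros((m, m))
    for i in range(m):
        Xi = np.delete(Xl, i, axis=1)            # dictionary X_i without x_i
        xi = Xl[:, i]
        wi = np.linalg.solve(Xi.T @ Xi + lam * np.eye(m - 1), Xi.T @ xi)
        W[np.arange(m) != i, i] = wi             # diagonal of W stays zero
    return W

def graph_embedding(X, W, K):
    """Solve X L X^T P = Lambda X L_p X^T P, eq. (3), with L_p = I (scale normalization)."""
    L = np.diag(W.sum(axis=1)) - W               # L = D - W, D_ii = sum_j W_ij
    A = X @ L @ X.T
    A = (A + A.T) / 2.0                          # symmetrize: W from (7) is not exactly symmetric
    B = X @ X.T + 1e-6 * np.eye(X.shape[0])      # L_p = I; small ridge for numerical stability
    evals, evecs = eigh(A, B)                    # generalized symmetric eigenproblem, ascending
    idx = np.flatnonzero(evals > 1e-10)[:K]      # K smallest nonzero eigenvalues
    return evecs[:, idx]                         # d x K projection matrix P

# Usage sketch: X is d x M with unit-norm columns sorted by class label y.
# W = block_diag(*[cgda_class_block(X[:, y == c]) for c in np.unique(y)])
# P = graph_embedding(X, W, K=20); Y_reduced = P.T @ X
```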

III. PROPOSED DIMENSIONALITY-REDUCTION METHODS

A. LapCGDA

In [35], it was demonstrated that CGDA is superior to SGDA in terms of both classification performance and computational cost. However, neither SGDA nor CGDA considers the manifold structure within the data, which may cause some locality information to be missed in the embedding process. To address this issue, LapCGDA is proposed; that is, a collaborative and Laplacian graph is constructed with the objective function

\arg\min_{w_i} \| x_i - X_i w_i \|_2^2 + \lambda \| w_i \|_2^2 + \beta T_i    (9)

where \lambda and \beta are two regularization parameters that balance the two types of penalty, and T_i = w_i^T Z_i w_i is the manifold regularization term corresponding to w_i. Note that, when \beta = 0, LapCGDA reduces to CGDA. Here, Z_i is the Laplacian of the graph with affinity matrix A_i, whose (p, q)th element is calculated as A_{p,q} = \exp\!\left(-\| x_p - x_q \|_2^2 / (\gamma_p \gamma_q)\right), where \gamma_p = \| x_p - x_p^{(k_{nn})} \| denotes the local scaling of data samples in the neighborhood of x_p, x_p^{(k_{nn})} is the k_{nn}-nearest neighbor of x_p, and x_p and x_q are taken from X_i, which excludes x_i itself. (Here, k_{nn} is a tuning parameter; according to our empirical study, k_{nn} = 7 works well for all the experiments.) This affinity matrix has been proved to be effective in locality preservation [12], [46]. Taking the derivative with respect to w_i and setting the resulting equation to zero yields

-X_i^T x_i + X_i^T X_i w_i + \beta Z_i w_i + \lambda w_i = 0    (10)

and the closed-form solution is

w_i = (X_i^T X_i + \lambda I + \beta Z_i)^{-1} X_i^T x_i.    (11)

Thus, a manifold-regularized collaborative graph is obtained. Note that the graph is constructed with class label information in the same way as CGDA. The overall description of the proposed LapCGDA is given in Algorithm 1.

Algorithm 1: Proposed LapCGDA Algorithm
Input: training data X = \{x_i\}_{i=1}^{M} \subset R^d with class labels, and the regularization parameters \lambda and \beta.
1) Normalize the columns of X to have unit l_2-norm.
2) Obtain the graph weight matrix W by solving (11) in closed form.
3) Compute the projections by solving the eigenvalue decomposition in (3).
Output: a projection matrix P.
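As an illustration of (9)-(11), here is a minimal Python/NumPy sketch (an illustration, not the authors' released code) that builds the locally scaled affinity A_i, its graph Laplacian Z_i, and the Laplacian regularized collaborative weights for one class block; lam, beta, and k_nn correspond to the tuning parameters named in the text, with values borrowed from the experiments as placeholders, and each class is assumed to hold more than k_nn + 1 samples.

```python
import numpy as np

def local_scaling_affinity(Xi, k_nn=7):
    """A_{p,q} = exp(-||x_p - x_q||^2 / (gamma_p * gamma_q)) with local scaling (samples as columns)."""
    D2 = np.square(Xi[:, :, None] - Xi[:, None, :]).sum(axis=0)   # pairwise squared distances
    order = np.argsort(D2, axis=1)                                # order[:, 0] is the sample itself
    gamma = np.sqrt(D2[np.arange(D2.shape[0]), order[:, k_nn]])   # distance to the k_nn-th neighbor
    A = np.exp(-D2 / np.outer(gamma, gamma))
    np.fill_diagonal(A, 0.0)
    return A

def lapcgda_class_block(Xl, lam=1e-2, beta=1e-4, k_nn=7):
    """Laplacian regularized collaborative weights for one class (columns of Xl), eq. (11)."""
    m = Xl.shape[1]
    W = np.zeros((m, m))
    for i in range(m):
        Xi = np.delete(Xl, i, axis=1)                 # dictionary X_i without x_i
        A = local_scaling_affinity(Xi, k_nn)
        Z = np.diag(A.sum(axis=1)) - A                # graph Laplacian Z_i
        xi = Xl[:, i]
        wi = np.linalg.solve(Xi.T @ Xi + lam * np.eye(m - 1) + beta * Z, Xi.T @ xi)
        W[np.arange(m) != i, i] = wi                  # beta = 0 recovers the CGDA weights of eq. (7)
    return W
```

Stacking these per-class blocks block-diagonally, exactly as in (8), and feeding the result to the graph-embedding step of (3) yields the LapCGDA projection.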
B. Kernel Extensions

Kernel methods learn nonlinear decision boundaries in a kernel-induced space [11], [14], [47] whose dimensionality is much higher than that of the input space. The kernel trick is widely used to avoid explicitly evaluating the nonlinear mapping function. For a given mapping function \Phi, a Mercer kernel function k(\cdot, \cdot) can be represented as

k(x_i, x_j) = \Phi(x_i)^T \Phi(x_j)    (12)

where \Phi maps a pixel x to the kernel-induced feature space, x \mapsto \Phi(x) \in R^{D \times 1} (D \gg d is the dimension of the kernel feature space). Commonly used kernels include the t-degree polynomial kernel k(x_i, x_j) = (x_i^T x_j + 1)^t (t \in Z^+) and the Gaussian radial basis function (RBF) kernel k(x_i, x_j) = \exp(-\sigma \| x_i - x_j \|_2^2) (\sigma > 0 is the RBF kernel parameter). For the graph-embedding process, the projection P^{(k)} in the kernel space is given by the solution of the generalized eigenvalue problem

K L^{(k)} K^T P^{(k)} = \Lambda^{(k)} K L_p^{(k)} K^T P^{(k)}    (13)

where K = \Phi^T \Phi \in R^{M \times M} is the Gram matrix with K_{i,j} = k(x_i, x_j), and L^{(k)} is the Laplacian matrix calculated from the weight matrix W^{(k)} in the kernel space.

In kernel CGDA (KCGDA), the objective function becomes

\arg\min_{w_i} \| \Phi(x_i) - \Phi_i w_i \|_2^2 + \lambda \| w_i \|_2^2    (14)

where \Phi_i = [\Phi(x_1), \Phi(x_2), \ldots, \Phi(x_M)] \in R^{D \times (M-1)}, excluding \Phi(x_i). The weight vector w_i of size (M-1) \times 1 can be recovered in closed form as

w_i = (\Phi_i^T \Phi_i + \lambda I)^{-1} \Phi_i^T \Phi(x_i) = (K_i + \lambda I)^{-1} k(\cdot, x_i)    (15)

where k(\cdot, x_i) = [k(x_1, x_i), k(x_2, x_i), \ldots, k(x_M, x_i)]^T \in R^{(M-1) \times 1} (with the entry for x_i itself excluded), and K_i = \Phi_i^T \Phi_i \in R^{(M-1) \times (M-1)}. Then, the weight matrix W^{(k)} in the kernel space can be constructed just as in (8).

In kernel LapCGDA (KLapCGDA), the affinity matrix A_i^{(k)} can be expressed as

A_{p,q}^{(k)} = \exp\!\left( -\frac{\| \Phi(x_p) - \Phi(x_q) \|_2^2}{\gamma_p \gamma_q} \right) = \exp\!\left( -\frac{(\Phi(x_p) - \Phi(x_q))^T (\Phi(x_p) - \Phi(x_q))}{\gamma_p \gamma_q} \right) = \exp\!\left( -\frac{K_{p,p} + K_{q,q} - 2 K_{p,q}}{\gamma_p \gamma_q} \right).    (16)

After obtaining A_i^{(k)}, the graph Laplacian is Z_i^{(k)} = B_i^{(k)} - A_i^{(k)}, where B_i^{(k)} is a diagonal matrix with the pth diagonal element B_{pp} = \sum_{q=1}^{M-1} A_{p,q}^{(k)}. Subsequently, the weight vector is computed in closed form as

w_i = \left( \Phi_i^T \Phi_i + \lambda I + \beta Z_i^{(k)} \right)^{-1} \Phi_i^T \Phi(x_i) = \left( K_i + \lambda I + \beta Z_i^{(k)} \right)^{-1} k(\cdot, x_i).    (17)

Finally, the weight matrix W^{(k)} in the kernel space can be calculated. In this paper, the RBF kernel is employed.
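The kernel versions only require Gram-matrix manipulations. Below is a minimal sketch, under the same assumptions as the earlier sketches (illustrative variable names and parameter values, RBF kernel, local scaling computed from the kernel-induced distances), of the KLapCGDA weights in (17); setting beta = 0 gives the KCGDA weights in (15).

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    """k(x, y) = exp(-sigma * ||x - y||^2); X and Y hold samples as columns."""
    d2 = (np.square(X).sum(axis=0)[:, None] + np.square(Y).sum(axis=0)[None, :]
          - 2.0 * X.T @ Y)
    return np.exp(-sigma * np.clip(d2, 0.0, None))

def klapcgda_class_block(Xl, sigma, lam=1e-2, beta=1e-4, k_nn=7):
    """Kernel-space collaborative weights for one class block, eqs. (15)-(17)."""
    m = Xl.shape[1]
    K_full = rbf_kernel(Xl, Xl, sigma)
    W = np.zeros((m, m))
    for i in range(m):
        keep = np.arange(m) != i
        Ki = K_full[np.ix_(keep, keep)]                        # K_i = Phi_i^T Phi_i
        ki = K_full[keep, i]                                   # k(., x_i)
        d2 = np.diag(Ki)[:, None] + np.diag(Ki)[None, :] - 2.0 * Ki   # kernel-induced distances, eq. (16)
        order = np.argsort(d2, axis=1)
        gamma = np.sqrt(d2[np.arange(m - 1), order[:, k_nn]])  # local scaling in the kernel space
        A = np.exp(-d2 / np.outer(gamma, gamma))
        np.fill_diagonal(A, 0.0)
        Z = np.diag(A.sum(axis=1)) - A                         # Z_i^(k) = B_i^(k) - A_i^(k)
        W[keep, i] = np.linalg.solve(Ki + lam * np.eye(m - 1) + beta * Z, ki)
    return W
```

The resulting W^{(k)} is then assembled block-diagonally as in (8) and plugged into the kernel eigenproblem of (13).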

Fig. 1. Visualization of graph weights for CGDA and LapCGDA using three-class synthetic data. (a) CGDA graph. (b) LapCGDA graph.

C. Analysis of LapCGDA and KLapCGDA

For hyperspectral data, spectral signatures can be affected by many factors, such as illumination conditions, geometric features of material surfaces, and atmospheric effects [48]. In this paper, LapCGDA is proposed as a dimensionality-reduction step that preserves the intrinsic geometry of the data. By considering both collaboration in the representation and the data manifold, the subspace induced by LapCGDA is expected to provide more discriminating information; when combined with a classifier such as the support vector machine (SVM) [49], [50], the resulting classification is more accurate.

In order to illustrate the benefit of LapCGDA, we test with three-class synthetic data whose statistical distribution is complex. That is, class 2 (marked by blue squares) is relatively separable from the other two, while class 1 (marked by red plus signs) mainly has two parts, one of which significantly overlaps with class 3 (marked by black circles). Fig. 1 illustrates the graph matrices learned by CGDA and the proposed LapCGDA. Both graphs reveal three independent segments. It is apparent that, within each segment of the CGDA graph, the distribution of white points (nonzero coefficients) is chaotic, whereas the graph obtained by LapCGDA presents clear within-block patterns. This type of block pattern can capture intrinsic correlation among samples, e.g., the geometric structure within the data, which is ignored by CGDA. Fig. 2 further shows the classification maps produced by these two techniques. For better visual comparison, we highlight the improved area with black dashed circles in Fig. 2(c) and (d), where the samples misclassified by CGDA are obvious, e.g., samples of class 1 are wrongly labeled as class 2. Overall, the classification accuracy of LapCGDA reaches 92.67%, an improvement of approximately 5% over CGDA. Fig. 2(d) and (e) also illustrates the performance of KCGDA and KLapCGDA, which are obviously better than their linear counterparts.

Fig. 2. Two-dimensional three-class synthetic data classification performance (the black dashed circle emphasizes the improved area). Note that the x- and y-axes indicate the range of the data after projection into the 2-D subspace. (a) Three-class synthetic data. (b) CGDA: 87.03%. (c) LapCGDA: 92.67%. (d) KCGDA: 89.33%. (e) KLapCGDA: 96.00%.

IV. EXPERIMENTAL RESULTS

A. Hyperspectral Data

The first data set employed in the experiments was acquired by the National Aeronautics and Space Administration's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor over the Indian Pines test site in northwest Indiana in June 1992. The image scene contains 145 x 145 pixels with 20-m spatial resolution and 220 bands in the 0.4- to 2.45-um spectral region. It contains two-thirds agriculture and one-third forest. In this paper, a total of 200 bands are used after removal of the water-absorption bands. There are 16 land-cover classes, not all mutually exclusive, in the designated ground truth map. The numbers of training and testing samples are summarized in Table I.

TABLE I. Class labels and train-test distribution of samples for the Indian Pines data set.

TABLE II. Class labels and train-test distribution of samples for the Salinas data set.

TABLE III. Class labels and train-test distribution of samples for the University of Pavia data set.

The second data set was also collected by the AVIRIS sensor, capturing an area over Salinas Valley, California. The image comprises 512 x 217 pixels with a spatial resolution of 3.7 m and 204 bands after 20 water-absorption bands are removed. It mainly contains vegetables, bare soils, and vineyard fields. There are also 16 classes, and the numbers of training and testing samples are listed in Table II.

The third experimental data set was collected by the Reflective Optics System Imaging Spectrometer (ROSIS) sensor over the city of Pavia, Italy, under the HySens project managed by the German Aerospace Agency (DLR). The image scene covers 610 x 340 pixels. The data set has 103 spectral bands prior to water-band removal, with spectral coverage from 0.43 to 0.86 um and a spatial resolution of 1.3 m. The labeled pixels in the ground truth map belong to nine classes. More detailed information on the numbers of training and testing samples is summarized in Table III.

B. Parameter Tuning

The classical SVM classifier is employed to validate the aforementioned dimensionality-reduction methods, including CGDA, LapCGDA, KCGDA, and KLapCGDA. A fivefold cross-validation strategy is employed for tuning the parameters in the classification tasks. Fig. 3 illustrates the sensitivity of the proposed LapCGDA as a function of the two important regularization parameters (i.e., lambda and beta) in its objective function (9). In the experiments, lambda is chosen from {1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1} and beta is chosen from {0, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1, 1, 1e1}. Optimal lambda and beta are determined for both LapCGDA and CGDA from the results in Fig. 3. For example, the optimal lambda of LapCGDA is 1e-2 and the optimal beta is 1e-4 for the Indian Pines data and the Salinas data; for the University of Pavia data, both lambda and beta can be set to 1e-3. It is worth mentioning that a nonzero value of beta verifies that the manifold regularization term has an impact on the dimensionality-reduction process. For KCGDA and KLapCGDA, the optimal lambda and beta are obtained in a similar way; as for the RBF kernel parameter, \sigma is set to the median value of 1/\|x_i - \bar{x}\|_2^2, i = 1, 2, \ldots, M, where \bar{x} = (1/M) \sum_{i=1}^{M} x_i is the mean of all available training samples [51].

Fig. 3. Parameter tuning of beta and lambda for the proposed LapCGDA using the three experimental data sets. (a) Indian Pines data. (b) Salinas data. (c) University of Pavia data.

Fig. 4 illustrates the classification accuracy as a function of the reduced dimensionality K for SGDA, CGDA, LapCGDA, KCGDA, and KLapCGDA. It is apparent that the performance tends to be stable when the dimensionality is larger than a certain value; for example, a reduced dimension of 20 appears to be sufficient for all three experimental data sets. From the curves in Fig. 4, we also notice that, for low dimensionality, the classification accuracy is often not high, whereas that of LapCGDA is always better than that of SGDA and CGDA, which further confirms that the proposed strategy is able to find a transform that can effectively reduce the dimensionality while enhancing class separability.

Fig. 4. Classification accuracy versus reduced dimensionality K for the compared methods on the experimental data sets. (a) Indian Pines data. (b) Salinas data. (c) University of Pavia data.
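For reference, a minimal sketch of the RBF parameter heuristic quoted above from [51]; the function name and the assumption that training pixels are stored as columns are illustrative, not part of the paper.

```python
import numpy as np

def rbf_sigma_heuristic(X):
    """sigma = median of 1 / ||x_i - x_bar||^2 over the training samples (columns of X)."""
    x_bar = X.mean(axis=1, keepdims=True)            # mean of all available training samples
    sq_dist = np.square(X - x_bar).sum(axis=0)       # ||x_i - x_bar||_2^2 for each sample
    return np.median(1.0 / sq_dist)

# Example: for unit-normalized training pixels X (d x M),
# sigma = rbf_sigma_heuristic(X), then k(x_i, x_j) = exp(-sigma * ||x_i - x_j||^2).
```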

C. Classification Performance

We compare the proposed LapCGDA with all the bands (denoted as ALL, i.e., without dimensionality reduction), the traditional LDA and LFDA, and the state-of-the-art SGDA and CGDA; the performances of KCGDA and KLapCGDA are also included. Tables IV-VI list the class-specific accuracy and overall accuracy (OA) for the three experimental data sets. From the results of each individual method, LDA is sometimes even worse than ALL since its reduced dimension is limited to C - 1, which may lose useful information. Furthermore, CGDA is generally superior to SGDA, LapCGDA performs better than both SGDA and CGDA, and KCGDA outperforms CGDA, as does KLapCGDA. For example, in Table IV, LapCGDA (86.70%) yields more than 2% higher accuracy than CGDA (84.59%), and KLapCGDA (88.52%) also provides approximately 2% higher accuracy than KCGDA (86.69%). It is interesting to notice that, for class 9 (i.e., Oats), the number of training samples is extremely small, causing many methods to lose efficacy; however, the proposed KLapCGDA achieves 95% accuracy, which verifies its effectiveness.

TABLE IV. SVM class-specific accuracy (in percent) and OA of different techniques for the Indian Pines data.

TABLE V. SVM class-specific accuracy (in percent) and OA of different techniques for the Salinas data.

TABLE VI. SVM class-specific accuracy (in percent) and OA of different techniques for the University of Pavia data.

In order to demonstrate the statistical significance of the accuracy improvement of the proposed methods, the standardized McNemar's test [52] is employed, as listed in Table VII. Z values of McNemar's test larger than 1.96 and 2.58 mean that two results are statistically different at the 95% and 99% confidence levels, respectively. The sign of Z indicates whether classifier 1 outperforms classifier 2 (Z > 0) or vice versa. In the experiments, we run the comparisons between LapCGDA and CGDA, KCGDA and CGDA, KLapCGDA and LapCGDA, and KLapCGDA and KCGDA separately. In Table VII, all values are larger than 2.58, which confirms that the proposed LapCGDA and KLapCGDA are highly discriminative dimensionality-reduction methods.

TABLE VII. Statistical significance from the standardized McNemar's test of the difference between methods.

Figs. 5-7 further illustrate the thematic maps. We produced ground-cover maps of the entire image scenes (including unlabeled pixels); however, to facilitate comparison between methods, only the areas for which ground truth is available are shown in these maps. The maps are consistent with the results listed in Tables IV-VI. Some areas in the classification maps produced by LapCGDA are obviously less noisy than those produced by SGDA and CGDA, e.g., the Soybeans-no till and Soybeans-clean regions in Fig. 5, the Vinyard-untrained region in Fig. 6, and the Gravel region in Fig. 7.

Fig. 5. Thematic maps resulting from classification for the Indian Pines data set with 16 classes. (a) Pseudo-color image. (b) Ground truth map. (c) LFDA: 81.79%. (d) SGDA: 83.34%. (e) CGDA: 84.59%. (f) LapCGDA: 86.70%. (g) KCGDA: 86.69%. (h) KLapCGDA: 88.52%.

Fig. 6. Thematic maps resulting from classification for the Salinas data set with 16 classes. (a) Pseudo-color image. (b) Ground truth map. (c) LFDA: 91.22%. (d) SGDA: 91.82%. (e) CGDA: 93.00%. (f) LapCGDA: 94.13%. (g) KCGDA: 93.97%. (h) KLapCGDA: 94.56%.

Fig. 7. Thematic maps resulting from classification for the University of Pavia data set with nine classes. (a) Pseudo-color image. (b) Ground truth map. (c) LFDA: 92.77%. (d) SGDA: 90.58%. (e) CGDA: 92.37%. (f) LapCGDA: 94.46%. (g) KCGDA: 93.28%. (h) KLapCGDA: 95.58%.

Fig. 8 illustrates the classification performance with different numbers of training samples. In practical situations, the number of available training samples may be insufficient to estimate models for each class, so it is necessary to investigate the sensitivity to training size. As shown in Fig. 8, for the Indian Pines data, the training size is varied from 1/10 to 1/5 (where 1/10 is the ratio of the number of training samples to the total labeled data); for the Salinas data and the University of Pavia data, the training-sample-size ratio is varied over [0.01, 0.05] and [0.06, 0.1], respectively, with an interval of 0.01. From the results, LapCGDA still consistently performs better than SGDA and CGDA, and the kernel methods outperform their linear versions. For example, KLapCGDA always yields about a 2% improvement over LapCGDA for the Indian Pines data; in Fig. 8(b), the improvement is even more obvious when the training size is extremely low (e.g., 0.01).

Fig. 8. Classification performance of the compared methods with different training sample sizes on the experimental data sets. (a) Indian Pines data. (b) Salinas data. (c) University of Pavia data.
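For completeness, the standardized McNemar's test reported in Table VII can be sketched as follows. This is a generic implementation of the statistic (Z computed from the counts of test samples that the two classifiers label differently); the exact variant used in [52] may differ slightly, e.g., in the use of a continuity correction.

```python
import numpy as np

def mcnemar_z(y_true, pred1, pred2):
    """Standardized McNemar statistic: Z > 0 favors classifier 1 over classifier 2."""
    y_true, pred1, pred2 = map(np.asarray, (y_true, pred1, pred2))
    f12 = np.sum((pred1 == y_true) & (pred2 != y_true))   # only classifier 1 correct
    f21 = np.sum((pred1 != y_true) & (pred2 == y_true))   # only classifier 2 correct
    return (f12 - f21) / np.sqrt(f12 + f21)

# |Z| > 1.96 and |Z| > 2.58 indicate differences that are statistically
# significant at the 95% and 99% confidence levels, respectively.
```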

Table VIII summarizes the computational complexity of the aforementioned graph-based dimensionality-reduction methods. All experiments were carried out in MATLAB on an Intel Core CPU machine with 8 GB of RAM. CGDA is clearly much faster than SGDA, which verifies its time efficiency. Building on this benefit, LapCGDA also provides the desired computational performance, only slightly worse than CGDA due to the additional burden of computing an affinity matrix. Even for KCGDA and KLapCGDA, the computational cost is much lower than that of SGDA.

TABLE VIII. Execution time (in seconds) on the three experimental data sets.

V. CONCLUSION

In this paper, a LapCGDA framework has been proposed to improve the state-of-the-art CGDA. In LapCGDA, the Laplacian of the data manifold graph is incorporated into CGDA, exploiting the intrinsic geometric information within the data. By considering both collaboration in the representation and the manifold structure, the subspace induced by LapCGDA provides more discriminative information. Furthermore, because the solution of the graph construction can be expressed in closed form, the computational cost of the proposed LapCGDA is extremely low. Both CGDA and LapCGDA were also extended into kernel versions, i.e., KCGDA and KLapCGDA. Experimental results with synthetic data and real hyperspectral images have demonstrated that the proposed LapCGDA and KLapCGDA are effective dimensionality-reduction methods and provide superior performance compared with SGDA and CGDA in terms of both classification accuracy and computational efficiency.

ACKNOWLEDGMENT

The authors would like to thank Dr. Nam Ly for sharing the MATLAB code of sparse graph-based discriminant analysis and collaborative graph-based discriminant analysis for comparison purposes.

REFERENCES

[1] B. Du, L. Zhang, L. Zhang, T. Chen, and K. Wu, "A discriminative manifold learning based dimension reduction method for hyperspectral classification," Int. J. Fuzzy Syst., vol. 14, no. 2, Jun.
[2] S. Prasad, W. Li, J. E. Fowler, and L. M. Bruce, "Information fusion in the redundant-wavelet-transform domain for noise-robust hyperspectral classification," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 9, Sep.
[3] W. Li, E. W. Tramel, S. Prasad, and J. E. Fowler, "Nearest regularized subspace for hyperspectral classification," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 1, Jan.
[4] B. Du and L. Zhang, "A discriminative metric learning based anomaly detection method," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 11, Nov.
[5] Y. Gu, T. Liu, X. Jia, J. A. Benediktsson, and J. Chanussot, "Nonlinear multiple kernel learning with multiple-structure-element extended morphological profiles for hyperspectral image classification," IEEE Trans. Geosci. Remote Sens., vol. 54, no. 6, Jun.
[6] B. Du and L. Zhang, "Target detection based on a dynamic subspace," Pattern Recognit., vol. 47, no. 1, Jan.
[7] L. Gao et al., "Subspace-based support vector machines for hyperspectral image classification," IEEE Geosci. Remote Sens. Lett., vol. 12, no. 2, Feb.
[8] M. Fauvel, J. Chanussot, and J. A. Benediktsson, "Kernel principal component analysis for the classification of hyperspectral remote sensing data over urban areas," EURASIP J. Appl. Signal Process., vol. 2009, no. 1, pp. 1-14, Jan.
[9] W. Li, S. Prasad, and J. E. Fowler, "Noise-adjusted subspace discriminant analysis for hyperspectral imagery classification," IEEE Geosci. Remote Sens. Lett., vol. 10, no. 6, Nov.
[10] J. Yang, A. F. Frangi, J. Yang, D. Zhang, and Z. Jin, "KPCA plus LDA: A complete kernel Fisher discriminant framework for feature extraction and recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 2, Feb.
[11] W. Li, S. Prasad, and J. E. Fowler, "Decision fusion in kernel-induced spaces for hyperspectral image classification," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 6, Jun.
[12] W. Li, S. Prasad, J. E. Fowler, and L. M. Bruce, "Locality-preserving dimensionality reduction and classification for hyperspectral image analysis," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 4, Apr.
[13] M. Cui, S. Prasad, W. Li, and L. M. Bruce, "Locality preserving genetic algorithms for spatial-spectral hyperspectral image classification," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 6, no. 3, Jun.
[14] W. Li, S. Prasad, J. E. Fowler, and L. M. Bruce, "Locality-preserving discriminant analysis in kernel-induced feature spaces for hyperspectral image classification," IEEE Geosci. Remote Sens. Lett., vol. 8, no. 5, Sep.
[15] X. He and P. Niyogi, "Locality preserving projections," in Advances in Neural Information Processing Systems, S. Thrun, L. Saul, and B. Schölkopf, Eds. Cambridge, MA, USA: MIT Press.
[16] V. Harikumar, P. P. Gajjar, M. V. Joshi, and M. S. Raval, "Multiresolution image fusion: Use of compressive sensing and graph cuts," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 5, May.
[17] Y. Li, Y. Tan, J. Den, Q. Wen, and J. Tian, "Cauchy graph embedding optimization for built-up areas detection from high-resolution remote sensing images," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 8, no. 5, May.
[18] W. Liao, M. Dalla Mura, J. Chanussot, and A. Pizurica, "Fusion of spectral and spatial information for classification of hyperspectral remote-sensed imagery by local graph," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 9, no. 2, Feb.
[19] S. Jia, X. Zhang, and Q. Li, "Spectral-spatial hyperspectral image classification using l1/2 regularized low-rank representation and sparse representation-based graph cuts," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 8, no. 6, Jun.
[20] M. T. Pham, G. Mercier, and J. Michel, "Pointwise graph-based local texture characterization for very high resolution multispectral image classification," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 8, no. 5, May.
[21] S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, "Graph embedding and extensions: A general framework for dimensionality reduction," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 1, Jan.
[22] D. Cai, X. He, J. Han, and T. Huang, "Graph regularized nonnegative matrix factorization for data representation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 8, Aug.
[23] L. Zhuang, H. Gao, Z. Lin, Y. Ma, X. Zhang, and N. Yu, "Non-negative low rank and sparse graph for semi-supervised learning," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Providence, RI, USA, 2012.
[24] M. Zhao, L. Jiao, J. Feng, and T. Liu, "A simplified low rank and sparse graph for semi-supervised learning," Neurocomputing, vol. 140.
[25] H. Yuan and Y. Tang, "Learning with hypergraph for hyperspectral image feature extraction," IEEE Geosci. Remote Sens. Lett., vol. 12, no. 8, Aug.
[26] W. Li, J. Liu, and Q. Du, "Sparse and low rank graph-based discriminant analysis for hyperspectral image classification," IEEE Trans. Geosci. Remote Sens., vol. 54, no. 7, Jul.
[27] B. Cheng, J. Yang, S. Yan, Y. Fu, and T. S. Huang, "Learning with l1-graph for image analysis," IEEE Trans. Image Process., vol. 19, no. 4, Apr.
[28] J. Tang, R. Hong, S. Yan, T. Chua, G. Qi, and R. Jain, "Image annotation by kNN-sparse graph-based label propagation over noisily tagged web images," ACM Trans. Intell. Syst. Technol., vol. 2, no. 2, pp. 1-14.
[29] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. Huang, and S. Yan, "Sparse representation for computer vision and pattern recognition," Proc. IEEE, vol. 98, no. 6, Jun.
[30] N. Ly, Q. Du, and J. E. Fowler, "Sparse graph-based discriminant analysis for hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 52, no. 7, Jul.
[31] W. He, H. Zhang, L. Zhang, W. Philips, and W. Liao, "Weighted sparse graph based dimensionality reduction for hyperspectral images," IEEE Geosci. Remote Sens. Lett., vol. 13, no. 5, May.
[32] K. Tan, S. Zhou, and Q. Du, "Semi-supervised discriminant analysis for hyperspectral imagery with block-sparse graph," IEEE Geosci. Remote Sens. Lett., vol. 12, no. 8, Aug.
[33] Z. Xue, P. Du, J. Li, and H. Su, "Simultaneous sparse graph embedding for hyperspectral image classification," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 11, Nov.
[34] W. Li, J. Liu, and Q. Du, "Sparse and low-rank graph for discriminant analysis of hyperspectral imagery," IEEE Trans. Geosci. Remote Sens., vol. 54, no. 7, Jul.
[35] N. Ly, Q. Du, and J. E. Fowler, "Collaborative graph-based discriminant analysis for hyperspectral imagery," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 6, Jun.
[36] W. Li and Q. Du, "Joint within-class collaborative representation for hyperspectral image classification," IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 7, no. 6, Jun.
[37] L. Zhang, M. Yang, and X. Feng, "Sparse representation or collaborative representation: Which helps face recognition?" in Proc. Int. Conf. Comput. Vis., Barcelona, Spain, Nov. 2011.
[38] W. Li and Q. Du, "Collaborative representation for hyperspectral anomaly detection," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 3, Mar.
[39] W. Li, Q. Du, and B. Zhang, "Combined sparse and collaborative representation for hyperspectral target detection," Pattern Recognit., vol. 48.
[40] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, Dec.
[41] X. He, D. Cai, S. Yan, and H. Zhang, "Neighborhood preserving embedding," in Proc. Int. Conf. Comput. Vis., Beijing, China, Oct. 2005.
[42] X. He, D. Cai, Y. Shao, H. Bao, and J. Han, "Laplacian regularized Gaussian mixture model for data clustering," IEEE Trans. Knowl. Data Eng., vol. 23, no. 9, Sep.
[43] J. Liu, Y. Chen, J. Zhang, and Z. Xu, "Enhancing low-rank subspace clustering by manifold regularization," IEEE Trans. Image Process., vol. 23, no. 9, Sep.
[44] L. Ma, M. M. Crawford, X. Yang, and Y. Guo, "Local manifold learning based graph construction for semisupervised hyperspectral image classification," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 5, May.
[45] H. Huang and M. Yang, "Dimensionality reduction of hyperspectral images with sparse discriminant embedding," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 9, Sep.
[46] M. Sugiyama, "Local Fisher discriminant analysis for supervised dimensionality reduction," in Proc. Int. Conf. Mach. Learn., Pittsburgh, PA, USA, Jun. 2006.
[47] D. Wang, H. Lu, and M. H. Yang, "Kernel collaborative face recognition," Pattern Recognit., vol. 48, no. 10, Oct.
[48] G. Shaw and D. Manolakis, "Signal processing for hyperspectral image exploitation," IEEE Signal Process. Mag., vol. 19, no. 1, Jan.
[49] C.-H. Li, B.-C. Kuo, C.-T. Lin, and C.-S. Huang, "A spatial-contextual support vector machine for remotely sensed image classification," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 3, Mar.
[50] W. Li, C. Chen, H. Su, and Q. Du, "Local binary patterns and extreme learning machine for hyperspectral imagery classification," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 7, Jul.
[51] L. Zhang et al., "Kernel sparse representation-based classifier," IEEE Trans. Signal Process., vol. 60, no. 4, Apr.
[52] A. Villa, J. A. Benediktsson, J. Chanussot, and C. Jutten, "Hyperspectral image classification with independent component discriminant analysis," IEEE Trans. Geosci. Remote Sens., vol. 49, no. 12, Dec.

Wei Li (S'11-M'13) received the B.E. degree in telecommunications engineering from Xidian University, Xi'an, China, in 2007; the M.S. degree in information science and technology from Sun Yat-sen University, Guangzhou, China, in 2009; and the Ph.D. degree in electrical and computer engineering from Mississippi State University, Starkville, MS, USA. Subsequently, he spent one year as a Postdoctoral Researcher at the University of California, Davis, CA, USA. He is currently with the College of Information Science and Technology, Beijing University of Chemical Technology, Beijing, China. His research interests include statistical pattern recognition, hyperspectral image analysis, and data compression.
Dr. Li is an active Reviewer for the IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, the IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, and the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING (JSTARS). He is the recipient of the 2015 Best Reviewer Award from the IEEE Geoscience and Remote Sensing Society for his service to IEEE JSTARS.

Qian Du (S'98-M'00-SM'05) received the Ph.D. degree in electrical engineering from the University of Maryland Baltimore County, Baltimore, MD, USA. She is currently the Bobby Shackouls Professor with the Department of Electrical and Computer Engineering, Mississippi State University, Starkville, MS, USA. Her research interests include hyperspectral remote sensing image analysis and applications, pattern classification, data compression, and neural networks.
Dr. Du is a Fellow of SPIE, the International Society for Optics and Photonics. She is the General Chair of the 4th IEEE Geoscience and Remote Sensing Society (GRSS) Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS) in Shanghai, China. She served as the Co-Chair of the Data Fusion Technical Committee of the IEEE GRSS and as the Chair of the Remote Sensing and Mapping Technical Committee of the International Association for Pattern Recognition. She served as an Associate Editor for the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, the Journal of Applied Remote Sensing, and the IEEE SIGNAL PROCESSING LETTERS. Since 2016, she has been the Editor-in-Chief of the IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING. She was the recipient of the 2010 Best Reviewer Award from the IEEE GRSS.


More information

Non-linear dimension reduction

Non-linear dimension reduction Sta306b May 23, 2011 Dimension Reduction: 1 Non-linear dimension reduction ISOMAP: Tenenbaum, de Silva & Langford (2000) Local linear embedding: Roweis & Saul (2000) Local MDS: Chen (2006) all three methods

More information

MULTI/HYPERSPECTRAL imagery has the potential to

MULTI/HYPERSPECTRAL imagery has the potential to IEEE GEOSCIENCE AND REMOTE SENSING ETTERS, VO. 11, NO. 12, DECEMBER 2014 2183 Three-Dimensional Wavelet Texture Feature Extraction and Classification for Multi/Hyperspectral Imagery Xian Guo, Xin Huang,

More information

Globally and Locally Consistent Unsupervised Projection

Globally and Locally Consistent Unsupervised Projection Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence Globally and Locally Consistent Unsupervised Projection Hua Wang, Feiping Nie, Heng Huang Department of Electrical Engineering

More information

NTHU Rain Removal Project

NTHU Rain Removal Project People NTHU Rain Removal Project Networked Video Lab, National Tsing Hua University, Hsinchu, Taiwan Li-Wei Kang, Institute of Information Science, Academia Sinica, Taipei, Taiwan Chia-Wen Lin *, Department

More information

AN ENHANCED ATTRIBUTE RERANKING DESIGN FOR WEB IMAGE SEARCH

AN ENHANCED ATTRIBUTE RERANKING DESIGN FOR WEB IMAGE SEARCH AN ENHANCED ATTRIBUTE RERANKING DESIGN FOR WEB IMAGE SEARCH Sai Tejaswi Dasari #1 and G K Kishore Babu *2 # Student,Cse, CIET, Lam,Guntur, India * Assistant Professort,Cse, CIET, Lam,Guntur, India Abstract-

More information

SEMI-SUPERVISED LEARNING (SSL) for classification

SEMI-SUPERVISED LEARNING (SSL) for classification IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 12, DECEMBER 2015 2411 Bilinear Embedding Label Propagation: Towards Scalable Prediction of Image Labels Yuchen Liang, Zhao Zhang, Member, IEEE, Weiming Jiang,

More information

Locality Preserving Genetic Algorithms for Spatial-Spectral Hyperspectral Image Classification

Locality Preserving Genetic Algorithms for Spatial-Spectral Hyperspectral Image Classification 1688 IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, VOL. 6, NO. 3, JUNE 2013 Locality Preserving Genetic Algorithms for Spatial-Spectral Hyperspectral Image Classification

More information

STRATIFIED SAMPLING METHOD BASED TRAINING PIXELS SELECTION FOR HYPER SPECTRAL REMOTE SENSING IMAGE CLASSIFICATION

STRATIFIED SAMPLING METHOD BASED TRAINING PIXELS SELECTION FOR HYPER SPECTRAL REMOTE SENSING IMAGE CLASSIFICATION Volume 117 No. 17 2017, 121-126 ISSN: 1311-8080 (printed version); ISSN: 1314-3395 (on-line version) url: http://www.ijpam.eu ijpam.eu STRATIFIED SAMPLING METHOD BASED TRAINING PIXELS SELECTION FOR HYPER

More information

Fuzzy Bidirectional Weighted Sum for Face Recognition

Fuzzy Bidirectional Weighted Sum for Face Recognition Send Orders for Reprints to reprints@benthamscience.ae The Open Automation and Control Systems Journal, 2014, 6, 447-452 447 Fuzzy Bidirectional Weighted Sum for Face Recognition Open Access Pengli Lu

More information

Nonlinear Dimensionality Reduction Applied to the Classification of Images

Nonlinear Dimensionality Reduction Applied to the Classification of Images onlinear Dimensionality Reduction Applied to the Classification of Images Student: Chae A. Clark (cclark8 [at] math.umd.edu) Advisor: Dr. Kasso A. Okoudjou (kasso [at] math.umd.edu) orbert Wiener Center

More information

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Minh Dao 1, Xiang Xiang 1, Bulent Ayhan 2, Chiman Kwan 2, Trac D. Tran 1 Johns Hopkins Univeristy, 3400

More information

HYPERSPECTRAL image (HSI) acquired by spaceborne

HYPERSPECTRAL image (HSI) acquired by spaceborne 1 SuperPCA: A Superpixelwise PCA Approach for Unsupervised Feature Extraction of Hyperspectral Imagery Junjun Jiang, Member, IEEE, Jiayi Ma, Member, IEEE, Chen Chen, Member, IEEE, Zhongyuan Wang, Member,

More information

Sparsity Preserving Canonical Correlation Analysis

Sparsity Preserving Canonical Correlation Analysis Sparsity Preserving Canonical Correlation Analysis Chen Zu and Daoqiang Zhang Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China {zuchen,dqzhang}@nuaa.edu.cn

More information

Automatic Shadow Removal by Illuminance in HSV Color Space

Automatic Shadow Removal by Illuminance in HSV Color Space Computer Science and Information Technology 3(3): 70-75, 2015 DOI: 10.13189/csit.2015.030303 http://www.hrpub.org Automatic Shadow Removal by Illuminance in HSV Color Space Wenbo Huang 1, KyoungYeon Kim

More information

Facial Expression Recognition Using Expression- Specific Local Binary Patterns and Layer Denoising Mechanism

Facial Expression Recognition Using Expression- Specific Local Binary Patterns and Layer Denoising Mechanism Facial Expression Recognition Using Expression- Specific Local Binary Patterns and Layer Denoising Mechanism 1 2 Wei-Lun Chao, Jun-Zuo Liu, 3 Jian-Jiun Ding, 4 Po-Hung Wu 1, 2, 3, 4 Graduate Institute

More information

REMOTE sensing hyperspectral images (HSI) are acquired

REMOTE sensing hyperspectral images (HSI) are acquired IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, VOL. 10, NO. 3, MARCH 2017 1151 Exploring Structural Consistency in Graph Regularized Joint Spectral-Spatial Sparse Coding

More information

Spatially variant dimensionality reduction for the visualization of multi/hyperspectral images

Spatially variant dimensionality reduction for the visualization of multi/hyperspectral images Author manuscript, published in "International Conference on Image Analysis and Recognition, Burnaby : Canada (2011)" DOI : 10.1007/978-3-642-21593-3_38 Spatially variant dimensionality reduction for the

More information

Structure-adaptive Image Denoising with 3D Collaborative Filtering

Structure-adaptive Image Denoising with 3D Collaborative Filtering , pp.42-47 http://dx.doi.org/10.14257/astl.2015.80.09 Structure-adaptive Image Denoising with 3D Collaborative Filtering Xuemei Wang 1, Dengyin Zhang 2, Min Zhu 2,3, Yingtian Ji 2, Jin Wang 4 1 College

More information

Data Mining Chapter 3: Visualizing and Exploring Data Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University

Data Mining Chapter 3: Visualizing and Exploring Data Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Data Mining Chapter 3: Visualizing and Exploring Data Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Exploratory data analysis tasks Examine the data, in search of structures

More information

[Khadse, 4(7): July, 2015] ISSN: (I2OR), Publication Impact Factor: Fig:(1) Image Samples Of FERET Dataset

[Khadse, 4(7): July, 2015] ISSN: (I2OR), Publication Impact Factor: Fig:(1) Image Samples Of FERET Dataset IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY IMPLEMENTATION OF THE DATA UNCERTAINTY MODEL USING APPEARANCE BASED METHODS IN FACE RECOGNITION Shubhangi G. Khadse, Prof. Prakash

More information

A New Orthogonalization of Locality Preserving Projection and Applications

A New Orthogonalization of Locality Preserving Projection and Applications A New Orthogonalization of Locality Preserving Projection and Applications Gitam Shikkenawis 1,, Suman K. Mitra, and Ajit Rajwade 2 1 Dhirubhai Ambani Institute of Information and Communication Technology,

More information

Color Local Texture Features Based Face Recognition

Color Local Texture Features Based Face Recognition Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India

More information

IMAGE RESTORATION VIA EFFICIENT GAUSSIAN MIXTURE MODEL LEARNING

IMAGE RESTORATION VIA EFFICIENT GAUSSIAN MIXTURE MODEL LEARNING IMAGE RESTORATION VIA EFFICIENT GAUSSIAN MIXTURE MODEL LEARNING Jianzhou Feng Li Song Xiaog Huo Xiaokang Yang Wenjun Zhang Shanghai Digital Media Processing Transmission Key Lab, Shanghai Jiaotong University

More information

City, University of London Institutional Repository

City, University of London Institutional Repository City Research Online City, University of London Institutional Repository Citation: Zhu, R. ORCID: 0000-0002-9944-0369, Dong, M. and Xue, J-H. (2014). Spectral non-local restoration of hyperspectral images

More information

Spectral-spatial rotation forest for hyperspectral image classification

Spectral-spatial rotation forest for hyperspectral image classification Spectral-spatial rotation forest for hyperspectral image classification Junshi Xia, Lionel Bombrun, Yannick Berthoumieu, Christian Germain, Peijun Du To cite this version: Junshi Xia, Lionel Bombrun, Yannick

More information

Spatial-Spectral Dimensionality Reduction of Hyperspectral Imagery with Partial Knowledge of Class Labels

Spatial-Spectral Dimensionality Reduction of Hyperspectral Imagery with Partial Knowledge of Class Labels Spatial-Spectral Dimensionality Reduction of Hyperspectral Imagery with Partial Knowledge of Class Labels Nathan D. Cahill, Selene E. Chew, and Paul S. Wenger Center for Applied and Computational Mathematics,

More information

Hyperspectral Data Classification via Sparse Representation in Homotopy

Hyperspectral Data Classification via Sparse Representation in Homotopy Hyperspectral Data Classification via Sparse Representation in Homotopy Qazi Sami ul Haq,Lixin Shi,Linmi Tao,Shiqiang Yang Key Laboratory of Pervasive Computing, Ministry of Education Department of Computer

More information

MULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS. Yanghao Li, Jiaying Liu, Wenhan Yang, Zongming Guo

MULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS. Yanghao Li, Jiaying Liu, Wenhan Yang, Zongming Guo MULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS Yanghao Li, Jiaying Liu, Wenhan Yang, Zongg Guo Institute of Computer Science and Technology, Peking University, Beijing, P.R.China,

More information

Textural Features for Hyperspectral Pixel Classification

Textural Features for Hyperspectral Pixel Classification Textural Features for Hyperspectral Pixel Classification Olga Rajadell, Pedro García-Sevilla, and Filiberto Pla Depto. Lenguajes y Sistemas Informáticos Jaume I University, Campus Riu Sec s/n 12071 Castellón,

More information

MULTIVARIATE TEXTURE DISCRIMINATION USING A PRINCIPAL GEODESIC CLASSIFIER

MULTIVARIATE TEXTURE DISCRIMINATION USING A PRINCIPAL GEODESIC CLASSIFIER MULTIVARIATE TEXTURE DISCRIMINATION USING A PRINCIPAL GEODESIC CLASSIFIER A.Shabbir 1, 2 and G.Verdoolaege 1, 3 1 Department of Applied Physics, Ghent University, B-9000 Ghent, Belgium 2 Max Planck Institute

More information

Learning a Manifold as an Atlas Supplementary Material

Learning a Manifold as an Atlas Supplementary Material Learning a Manifold as an Atlas Supplementary Material Nikolaos Pitelis Chris Russell School of EECS, Queen Mary, University of London [nikolaos.pitelis,chrisr,lourdes]@eecs.qmul.ac.uk Lourdes Agapito

More information

Stepwise Metric Adaptation Based on Semi-Supervised Learning for Boosting Image Retrieval Performance

Stepwise Metric Adaptation Based on Semi-Supervised Learning for Boosting Image Retrieval Performance Stepwise Metric Adaptation Based on Semi-Supervised Learning for Boosting Image Retrieval Performance Hong Chang & Dit-Yan Yeung Department of Computer Science Hong Kong University of Science and Technology

More information

Adaptive Doppler centroid estimation algorithm of airborne SAR

Adaptive Doppler centroid estimation algorithm of airborne SAR Adaptive Doppler centroid estimation algorithm of airborne SAR Jian Yang 1,2a), Chang Liu 1, and Yanfei Wang 1 1 Institute of Electronics, Chinese Academy of Sciences 19 North Sihuan Road, Haidian, Beijing

More information

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 7, JULY

IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 7, JULY IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 7, JULY 2015 2037 Spatial Coherence-Based Batch-Mode Active Learning for Remote Sensing Image Classification Qian Shi, Bo Du, Member, IEEE, and Liangpei

More information

PARALLEL IMPLEMENTATION OF MORPHOLOGICAL PROFILE BASED SPECTRAL-SPATIAL CLASSIFICATION SCHEME FOR HYPERSPECTRAL IMAGERY

PARALLEL IMPLEMENTATION OF MORPHOLOGICAL PROFILE BASED SPECTRAL-SPATIAL CLASSIFICATION SCHEME FOR HYPERSPECTRAL IMAGERY PARALLEL IMPLEMENTATION OF MORPHOLOGICAL PROFILE BASED SPECTRAL-SPATIAL CLASSIFICATION SCHEME FOR HYPERSPECTRAL IMAGERY B. Kumar a, O. Dikshit b a Department of Computer Science & Information Technology,

More information

A Fourier Extension Based Algorithm for Impulse Noise Removal

A Fourier Extension Based Algorithm for Impulse Noise Removal A Fourier Extension Based Algorithm for Impulse Noise Removal H. Sahoolizadeh, R. Rajabioun *, M. Zeinali Abstract In this paper a novel Fourier extension based algorithm is introduced which is able to

More information

Robust Kernel Methods in Clustering and Dimensionality Reduction Problems

Robust Kernel Methods in Clustering and Dimensionality Reduction Problems Robust Kernel Methods in Clustering and Dimensionality Reduction Problems Jian Guo, Debadyuti Roy, Jing Wang University of Michigan, Department of Statistics Introduction In this report we propose robust

More information

IMAGE classification plays an important role in remote sensing

IMAGE classification plays an important role in remote sensing IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 11, NO. 2, FEBRUARY 2014 489 Spatial-Attraction-Based Markov Random Field Approach for Classification of High Spatial Resolution Multispectral Imagery Hua

More information

HYPERSPECTRAL remote sensing sensors provide hundreds

HYPERSPECTRAL remote sensing sensors provide hundreds 70 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 53, NO. 1, JANUARY 2015 Spectral Spatial Classification of Hyperspectral Data via Morphological Component Analysis-Based Image Separation Zhaohui

More information

Discriminative Locality Alignment

Discriminative Locality Alignment Discriminative Locality Alignment Tianhao Zhang 1, Dacheng Tao 2,3,andJieYang 1 1 Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China 2 School of Computer

More information

A Feature Selection Method to Handle Imbalanced Data in Text Classification

A Feature Selection Method to Handle Imbalanced Data in Text Classification A Feature Selection Method to Handle Imbalanced Data in Text Classification Fengxiang Chang 1*, Jun Guo 1, Weiran Xu 1, Kejun Yao 2 1 School of Information and Communication Engineering Beijing University

More information

ESSENTIALLY, system modeling is the task of building

ESSENTIALLY, system modeling is the task of building IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 53, NO. 4, AUGUST 2006 1269 An Algorithm for Extracting Fuzzy Rules Based on RBF Neural Network Wen Li and Yoichi Hori, Fellow, IEEE Abstract A four-layer

More information

High-Resolution Image Classification Integrating Spectral-Spatial-Location Cues by Conditional Random Fields

High-Resolution Image Classification Integrating Spectral-Spatial-Location Cues by Conditional Random Fields IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 9, SEPTEMBER 2016 4033 High-Resolution Image Classification Integrating Spectral-Spatial-Location Cues by Conditional Random Fields Ji Zhao, Student

More information

Image retrieval based on bag of images

Image retrieval based on bag of images University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2009 Image retrieval based on bag of images Jun Zhang University of Wollongong

More information

Frame based kernel methods for hyperspectral imagery data

Frame based kernel methods for hyperspectral imagery data Frame based kernel methods for hyperspectral imagery data Norbert Wiener Center Department of Mathematics University of Maryland, College Park Recent Advances in Harmonic Analysis and Elliptic Partial

More information

REPORT DOCUMENTATION PAGE

REPORT DOCUMENTATION PAGE REPORT DOCUMENTATION PAGE Form Approved OMB NO. 0704-0188 The public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instructions,

More information

Directional Derivative and Feature Line Based Subspace Learning Algorithm for Classification

Directional Derivative and Feature Line Based Subspace Learning Algorithm for Classification Journal of Information Hiding and Multimedia Signal Processing c 206 ISSN 2073-422 Ubiquitous International Volume 7, Number 6, November 206 Directional Derivative and Feature Line Based Subspace Learning

More information

THE imaging spectrometer, airborne or spaceborne, is a

THE imaging spectrometer, airborne or spaceborne, is a IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 1 Semisupervised Discriminative Locally Enhanced Alignment for Hyperspectral Image Classification Qian Shi, Student Member, IEEE, Liangpei Zhang, Senior

More information

Change Detection in Remotely Sensed Images Based on Image Fusion and Fuzzy Clustering

Change Detection in Remotely Sensed Images Based on Image Fusion and Fuzzy Clustering International Journal of Electronics Engineering Research. ISSN 0975-6450 Volume 9, Number 1 (2017) pp. 141-150 Research India Publications http://www.ripublication.com Change Detection in Remotely Sensed

More information

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 1

IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 1 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 1 Local Binary Patterns and Extreme Learning Machine for Hyperspectral Imagery Classification Wei Li, Member, IEEE, Chen Chen, Student Member, IEEE, Hongjun

More information

Land-use scene classification using multi-scale completed local binary patterns

Land-use scene classification using multi-scale completed local binary patterns DOI 10.1007/s11760-015-0804-2 ORIGINAL PAPER Land-use scene classification using multi-scale completed local binary patterns Chen Chen 1 Baochang Zhang 2 Hongjun Su 3 Wei Li 4 Lu Wang 4 Received: 25 April

More information

Graph Laplacian Kernels for Object Classification from a Single Example

Graph Laplacian Kernels for Object Classification from a Single Example Graph Laplacian Kernels for Object Classification from a Single Example Hong Chang & Dit-Yan Yeung Department of Computer Science, Hong Kong University of Science and Technology {hongch,dyyeung}@cs.ust.hk

More information

IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 1

IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 1 IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 1 Exploring Locally Adaptive Dimensionality Reduction for Hyperspectral Image Classification: A Maximum Margin Metric Learning

More information

Technical Report. Title: Manifold learning and Random Projections for multi-view object recognition

Technical Report. Title: Manifold learning and Random Projections for multi-view object recognition Technical Report Title: Manifold learning and Random Projections for multi-view object recognition Authors: Grigorios Tsagkatakis 1 and Andreas Savakis 2 1 Center for Imaging Science, Rochester Institute

More information