Hyperspectral Image Classification Using Gradient Local Auto-Correlations

Chen Chen 1, Junjun Jiang 2, Baochang Zhang 3, Wankou Yang 4, Jianzhong Guo 5
1. Department of Electrical Engineering, University of Texas at Dallas, Texas, USA
2. School of Computer Science, China University of Geosciences, Wuhan, China
3. School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
4. School of Automation, Southeast University, Nanjing, China
5. School of Electronic and Electrical Engineering, Wuhan Textile University, Wuhan, China
chenchen870713@gmail.com

Abstract

Spatial information has been verified to be helpful in hyperspectral image classification. In this paper, a spatial feature extraction method utilizing the spatial and orientational auto-correlations of local image gradients is presented for hyperspectral imagery (HSI) classification. The Gradient Local Auto-Correlations (GLAC) method employs second order statistics (i.e., auto-correlations) and therefore captures richer information from images than histogram-based methods (e.g., the Histogram of Oriented Gradients), which use only first order statistics (i.e., histograms). Experiments carried out on two hyperspectral images demonstrate the effectiveness of the proposed method compared to state-of-the-art spatial feature extraction methods for HSI classification.

1. Introduction

Hyperspectral imagery (HSI) captures a dense spectral sampling of reflectance values over a wide range of the spectrum [1]. This rich spectral information provides additional capabilities for many remote sensing applications, including environmental monitoring, crop analysis, and plant and mineral exploration. Conventional HSI classification approaches consider only the spectral signature of each pixel in the image; techniques that assign a label to each pixel using spectral values alone are known as pixel-wise classifiers [2]. However, the spatial context in hyperspectral images is also useful for scene interpretation. During the last decade, there has been a great deal of effort in exploiting spatial features to improve HSI classification performance. In [3], a volumetric gray level co-occurrence matrix was used to extract texture features of hyperspectral images. In [4], a spectral-spatial preprocessing method was proposed to incorporate spatial features for HSI classification by employing a multihypothesis prediction strategy originally developed for compressed-sensing image reconstruction [5] and image super-resolution [6]. A 3-D discrete wavelet transform (3-D DWT) was employed in [7] to capture the spatial information of hyperspectral images at different scales and orientations. 2-D Gabor filters were applied to selected bands or principal components of the hyperspectral image to extract Gabor texture features for classification [8, 9]. Morphological profiles (MPs), generated via a series of structuring elements, were introduced in [10] to capture multiscale structural features for HSI classification. Due to the effectiveness of MPs in characterizing spatial structure, many MP-based features have been proposed for HSI classification, such as extended morphological profiles (EMPs) [11], attribute profiles (APs) [12], and extended multi-attribute profiles (EMAPs) [13]. In [14], local binary patterns (LBPs) and Gabor texture features were combined to enhance the discriminative power of the spatial features. Spatial feature extraction thus plays a key role in improving HSI classification performance.
In this paper, we introduce gradient local auto-correlations (GLAC) [15] and present a new spatial feature extraction method for hyperspectral images based on GLAC. The GLAC descriptor, which is built on second order statistics of gradients (the spatial and orientational auto-correlations of local image gradients), can effectively capture rich information from images and has been successfully used in motion recognition [22] and human detection [15, 23]. To the best of our knowledge, this is the first time image gradient based features have been used for hyperspectral image classification. Experimental results on two HSI datasets demonstrate the effectiveness of the proposed feature extraction method compared with several state-of-the-art spatial feature extraction methods for HSI classification.

The remainder of this paper is organized as follows. Section 2 describes the details of the GLAC descriptor and the classification framework. Section 3 presents the experimental results on two real hyperspectral datasets. Finally, Section 4 concludes the paper.

2. Methodology

2.1. Gradient local auto-correlations

The GLAC descriptor [15] is an effective tool for extracting shift-invariant image features.

Let I be an image region and r = (x, y)ᵀ a position vector in I. The magnitude and orientation angle of the image gradient at each pixel are given by n = √(I_x² + I_y²) and θ = arctan(I_y / I_x), respectively, where I_x and I_y are the horizontal and vertical derivatives of the image. The orientation θ is coded into D orientation bins by voting weights to the nearest bins, forming a gradient orientation vector f ∈ R^D. With the gradient orientation vector f and the gradient magnitude n, the Nth order auto-correlation function of local gradients can be expressed as

R_N(d_0, …, d_N, a_1, …, a_N) = ∫_I ω(n(r), n(r + a_1), …, n(r + a_N)) f_{d_0}(r) f_{d_1}(r + a_1) ⋯ f_{d_N}(r + a_N) dr,    (1)

where a_i are displacement vectors from the reference point r, f_d is the d-th element of f, and ω(·) is a weighting function. In the experiments reported later, N ∈ {0, 1}, a_{1x}, a_{1y} ∈ {±Δr, 0}, and ω(·) ≡ min(·) were used as suggested in [15], where Δr represents the displacement interval in both the horizontal and vertical directions. For N ∈ {0, 1}, the formulation of GLAC is given by

F_0:  R_{N=0}(d_0) = Σ_{r∈I} n(r) f_{d_0}(r),
F_1:  R_{N=1}(d_0, d_1, a_1) = Σ_{r∈I} min(n(r), n(r + a_1)) f_{d_0}(r) f_{d_1}(r + a_1).    (2)

The spatial auto-correlation patterns of (r, r + a_1) are shown in Figure 1.

Figure 1. Configuration patterns of (r, r + a_1).

The dimensionality of the above GLAC features (F_0 and F_1) is D + 4D². Although the dimensionality of the GLAC features is high, the computational cost is low owing to the sparseness of f; in other words, Eq. (2) is evaluated only for the few non-zero elements of f. It is also worth noting that the computational cost is invariant to the number of bins D, since the sparseness of f does not depend on D.

2.2. Proposed classification framework

Hyperspectral images usually have hundreds of spectral bands, so extracting spatial features from every spectral band image creates a high computational burden. In [16], it was suggested to use several principal components (PCs) of the hyperspectral data to address this issue; however, any feature reduction technique could be applied. In our spatial feature extraction method, principal component analysis (PCA) [17] is used to obtain the first K PCs. In each PC, GLAC features are generated for the pixel of interest from its corresponding local image patch of size w × w. The GLAC features from all PCs are concatenated to form a single composite feature vector for each pixel, as illustrated in Figure 2. For classification, an extreme learning machine (ELM) [18] is employed due to its efficient computation and good classification performance [9, 19].

Figure 2. Graphical illustration of the procedure of extracting GLAC features from a hyperspectral image.
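To make the computation in Eqs. (1)-(2) concrete, the following is a minimal sketch of GLAC extraction for a single image patch, written in Python/NumPy rather than the authors' MATLAB implementation. It assumes D orientation bins over the signed gradient orientation, bilinear voting to the two nearest bins, ω(·) = min(·), and the four displacement configurations of Figure 1 with interval Δr; the function and argument names (glac_features, num_bins, dr) are illustrative only.

import numpy as np

def glac_features(patch, num_bins=7, dr=4):
    """Sketch of GLAC for one 2-D patch; returns a vector of length
    num_bins + 4 * num_bins**2 (F0 plus F1 over four displacement patterns)."""
    patch = patch.astype(np.float64)
    gy, gx = np.gradient(patch)                       # vertical / horizontal derivatives
    mag = np.sqrt(gx ** 2 + gy ** 2)                  # gradient magnitude n(r)
    theta = np.mod(np.arctan2(gy, gx), 2 * np.pi)     # signed orientation in [0, 2*pi)

    # Vote each orientation into its two nearest of the D bins, giving the
    # sparse per-pixel orientation vector f(r) of length num_bins.
    pos = theta / (2 * np.pi) * num_bins
    lo = np.floor(pos).astype(int) % num_bins
    hi = (lo + 1) % num_bins
    w_hi = pos - np.floor(pos)
    h, w = patch.shape
    rows, cols = np.indices((h, w))
    f = np.zeros((h, w, num_bins))
    f[rows, cols, lo] = 1.0 - w_hi
    f[rows, cols, hi] += w_hi

    # F0: zeroth-order statistics (first line of Eq. (2)), length num_bins.
    f0 = (mag[..., None] * f).sum(axis=(0, 1))

    # F1: first-order auto-correlations for the four displacement patterns of
    # Figure 1, each contributing num_bins**2 values (omega(.) = min(.)).
    feats = [f0]
    for dy, dx in [(0, dr), (dr, dr), (dr, 0), (dr, -dr)]:
        acc = np.zeros((num_bins, num_bins))
        for y in range(h):
            for x in range(w):
                y2, x2 = y + dy, x + dx
                if 0 <= y2 < h and 0 <= x2 < w:
                    acc += min(mag[y, x], mag[y2, x2]) * np.outer(f[y, x], f[y2, x2])
        feats.append(acc.ravel())
    return np.concatenate(feats)

In the framework of Section 2.2, such a routine would be applied to the w × w patch centred on each pixel of each of the first K principal components, and the resulting vectors concatenated before classification.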
3. Experiments and analysis

In this section, we evaluate the proposed feature extraction method using two real hyperspectral datasets. In our experiments, the first 4 PCs (i.e., K = 4), which account for over 95% of the variance of the datasets, are considered. Three spatial feature extraction approaches, namely Gabor filters [8], EMAP [13], and LBP [14], are used for comparison with the proposed GLAC method. Moreover, classification using the spectral information alone (denoted by Spec) is also conducted.

3.1. Experimental data

We use two widely used benchmarks (the Indian Pines and Pavia University datasets) for HSI classification. Both datasets and their corresponding ground truth maps are obtained from the publicly available website [20].

The Indian Pines dataset was acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Indian Pines test site in northwestern Indiana. The original data consist of 224 spectral bands, which were reduced to 200 bands after removal of 24 water-absorption bands. The dataset has a spatial dimension of 145 × 145 pixels with a spatial resolution of 20 m, and contains 16 different land-cover classes.

The Pavia University dataset was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) over Pavia in northern Italy. This dataset has 103 spectral bands, each with a spatial dimension of 610 × 340 pixels and a spatial resolution of 1.3 m, and consists of 9 different land-cover classes.

The ground truth labels of the two datasets are shown in Figure 3. There are 10249 labeled pixels for the Indian Pines dataset and 42776 labeled pixels for the Pavia University dataset. Detailed information on the number of training and testing samples used for the two datasets is summarized in Tables 3 and 4, respectively. The default parameter settings for the competing methods (Gabor features, EMAP features, and LBP features) are adopted according to [8], [21], and [14], respectively.

Figure 3. Ground truth labels of the two hyperspectral datasets: (a) Indian Pines; (b) Pavia University.

3.2. Parameter setting

In the proposed feature extraction method, two parameters of GLAC are important: the number of orientation bins D and the displacement interval Δr. First, we estimate the optimal parameter set (D, Δr). The training samples are randomly selected and the remaining labeled samples are used for testing; the training and testing samples are fixed across the different parameter sets (D, Δr). For simplicity, we set the patch size to w = 21 in this parameter tuning experiment. The classification results for the various parameter sets on the two datasets are shown in Tables 1 and 2, respectively. From these two tables, a larger number of bins D generally yields better classification performance, but at the cost of a higher dimensionality of the GLAC features. Moreover, a smaller value of Δr achieves higher classification accuracy, since nearby local gradients are expected to be highly correlated. Therefore, D = 7 and Δr = 4 are chosen for the Indian Pines dataset as a trade-off between classification accuracy and feature dimensionality. Similarly, D = 6 and Δr = 3 are chosen for the Pavia University dataset.

Table 1. Classification accuracy (%) of GLAC with different parameters (D, Δr) for the Indian Pines dataset.
Δr \ D    1     2     3     4     5     6     7     8
 1       75.7  85.6  88.2  89.9  90.5  90.7  90.7  91.2
 2       76.0  84.8  88.8  90.2  90.5  90.7  91.2  91.1
 3       75.8  84.7  86.7  89.0  89.4  90.1  90.8  91.0
 4       75.1  84.1  88.0  89.6  90.2  90.6  91.6  91.6
 5       74.1  83.3  87.5  88.7  89.5  89.7  90.2  90.4
 6       73.1  83.4  87.4  88.7  88.9  89.9  90.4  91.1
 7       74.0  80.8  88.5  88.7  89.9  90.4  90.5  90.7
 8       75.2  82.5  88.9  88.8  90.7  91.2  91.1  91.4

Table 2. Classification accuracy (%) of GLAC with different parameters (D, Δr) for the Pavia University dataset.
Δr \ D    1     2     3     4     5     6     7     8
 1       71.8  82.3  82.3  85.5  84.7  83.9  85.3  84.5
 2       75.0  82.9  83.9  85.3  84.2  85.6  85.3  85.6
 3       76.1  82.9  84.2  84.2  84.9  87.2  87.1  87.5
 4       73.8  80.3  82.1  83.6  84.0  85.9  85.8  86.2
 5       72.6  79.8  81.7  83.4  84.1  85.8  85.9  86.5
 6       72.4  77.3  80.5  82.6  83.8  85.3  85.2  85.9
 7       70.3  76.7  80.1  82.4  83.0  85.2  85.1  86.5
 8       72.1  77.2  81.7  82.0  82.8  84.9  84.7  85.9

After selecting the parameter set (D, Δr), we study the patch size for the GLAC feature extraction method. The impact of different patch sizes is investigated and the results are presented in Figure 4. The classification performance reaches its maximum at w = 21 for both datasets. In addition, the parameters of the ELM classifier with a radial basis function (RBF) kernel are chosen as the ones that maximize the training accuracy by means of a 5-fold cross-validation test in all experiments.

Figure 4. Classification performance versus different patch sizes.
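As a concrete illustration of the classifier stage, the following is a minimal kernel ELM sketch in the spirit of [18] and [9], written in Python/NumPy rather than the authors' code; the regularization parameter C and RBF width gamma stand in for the values selected by the 5-fold cross-validation described above, and the class and method names are illustrative assumptions.

import numpy as np
from scipy.spatial.distance import cdist

class KernelELM:
    """Minimal kernel extreme learning machine with an RBF kernel (after [18])."""

    def __init__(self, C=100.0, gamma=0.1):
        self.C = C          # regularization parameter
        self.gamma = gamma  # RBF kernel width

    def _kernel(self, A, B):
        return np.exp(-self.gamma * cdist(A, B, 'sqeuclidean'))

    def fit(self, X, y):
        self.X_train = X
        self.classes = np.unique(y)
        T = (y[:, None] == self.classes[None, :]).astype(float)   # one-hot targets
        omega = self._kernel(X, X)                                 # training kernel matrix
        n = X.shape[0]
        # Output weights: alpha = (Omega + I / C)^(-1) T
        self.alpha = np.linalg.solve(omega + np.eye(n) / self.C, T)
        return self

    def predict(self, X_test):
        scores = self._kernel(X_test, self.X_train) @ self.alpha
        return self.classes[np.argmax(scores, axis=1)]

The predicted label of a test pixel is the class with the largest output score; (C, gamma) would be tuned on the training pixels only.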
3.3. Results

In order to quantify the efficacy of the proposed feature extraction method, we compare it with several state-of-the-art spatial feature extraction methods for HSI classification. To avoid any bias, the classification experiment is repeated 10 times with different realizations of randomly selected training and testing samples, and the classification performance (overall accuracy (OA) and the Kappa coefficient of agreement (κ)) is averaged over the 10 trials. The performance of the proposed method is shown in Tables 3 and 4 for the two experimental datasets.
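For completeness, OA and κ as used here can be computed from the confusion matrix as in the following short sketch (standard definitions, not code from the paper; the function name is illustrative).

import numpy as np

def overall_accuracy_and_kappa(y_true, y_pred, num_classes):
    """Overall accuracy and Cohen's kappa from reference and predicted labels (0..num_classes-1)."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                        # confusion matrix
    n = cm.sum()
    oa = np.trace(cm) / n                                    # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2    # expected chance agreement
    kappa = (oa - pe) / (1.0 - pe)
    return oa, kappa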

Table 3. The classification performance for the Indian Pines dataset.
Class                          Train   Test    Spec    Gabor   EMAP    LBP     GLAC
Alfalfa                          6       40    52.00   97.75   95.50   98.75   95.75
Corn-notill                      30    1398    56.61   83.58   79.77   88.28   86.98
Corn-mintill                     30     800    66.00   84.78   89.46   90.48   94.64
Corn                             24     213    75.63   98.26   96.20   98.40   98.92
Grass-pasture                    30     453    88.52   93.47   92.45   94.83   95.76
Grass-trees                      30     700    92.24   95.33   99.69   96.41   96.83
Grass-pasture-mowed              3       25    76.00   88.80   92.80   94.00   98.80
Hay-windrowed                    30     448    96.56   99.96   99.87   99.91  100.00
Oats                             2       18    47.22   85.00   85.56   87.78   83.33
Soybean-notill                   30     942    71.70   91.24   87.87   91.92   93.27
Soybean-mintill                  30    2425    57.79   78.94   88.81   84.06   86.98
Soybean-clean                    30     563    68.42   90.83   89.48   92.33   94.37
Wheat                            22     183    98.47   98.85   99.56   98.63   99.89
Woods                            30    1235    85.55   93.72   97.58   96.49   97.14
Buildings-Grass-Trees-Drives     30     356    69.21   98.62   95.70   99.78   98.74
Stone-Steel-Towers               10      83    86.63   93.25   91.20   93.98   93.73
Overall Accuracy (%)                           71.09   88.27   90.72   91.36   92.62
Kappa Coefficient                              0.6741  0.8673  0.8942  0.9019  0.9160

Table 4. The classification performance for the Pavia University dataset.
Class                          Train   Test    Spec    Gabor   EMAP    LBP     GLAC
Asphalt                          30    6601    67.07   70.23   76.23   80.34   80.38
Meadows                          30   18619    80.78   86.73   87.49   80.92   85.20
Gravel                           30    2069    76.34   80.39   74.53   95.02   93.57
Trees                            30    3034    92.20   83.78   93.27   73.83   80.92
Painted Metal Sheets             30    1315    99.38   99.64   98.13   92.10   97.03
Bare Soil                        30    4999    70.13   78.39   88.41   94.60   96.06
Bitumen                          30    1300    90.41   88.14   95.27   96.77   95.90
Self-Blocking Bricks             30    3652    68.31   85.72   91.85   93.31   95.17
Shadows                          30     917    94.56   77.40   98.91   75.27   84.07
Overall Accuracy (%)                           78.09   82.82   86.82   84.39   87.35
Kappa Coefficient                              0.7171  0.7763  0.8289  0.8005  0.8369

From the results, we can see that the performance of classification with spatial features is much better than that with the spectral signatures only (Spec). For example, GLAC produces over 20% and 9% higher accuracies than Spec for the Indian Pines dataset and the Pavia University dataset, respectively. This is because spatial features take advantage of local neighborhood information: adjacent pixels in the homogeneous regions of an HSI share similar characteristics and tend to belong to the same class. Among the various spatial features, GLAC achieves the highest classification accuracies for both datasets, which demonstrates that GLAC features exhibit more discriminative power than the other features.

Figure 5 provides a visual inspection of the classification maps generated using the different features for the Indian Pines dataset. As shown in this figure, the classification maps of the spatial feature based methods are less noisy and more accurate than the map of the pixel-wise classification method (i.e., Spec).

Figure 5. Thematic maps resulting from classification for the Indian Pines dataset. (a) Ground truth map. (b) Spec: 69.77%. (c) Gabor: 87.41%. (d) EMAP: 91.15%. (e) LBP: 91.21%. (f) GLAC: 92.78%.

We also report the computational cost of the different feature extraction methods on the Indian Pines dataset in Table 5. Experiments were carried out using MATLAB on an Intel i7 quad-core 3.4 GHz desktop computer with 8 GB of RAM. Although GLAC has the highest computational cost, it should be noted that GLAC feature extraction is performed independently on each PC, which means that the feature extraction can be parallelized; the speed of GLAC feature extraction over the PCs can therefore be greatly improved.

Table 5. Processing times of different feature extraction methods on the Indian Pines dataset.
Features            Processing time (s)
GLAC (proposed)     15.21
EMAP                 1.12
LBP                  3.69
Gabor                2.17
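Because the per-PC computations are independent, the feature extraction stage could, for instance, be parallelized across the K principal components as sketched below (a hypothetical Python illustration reusing the glac_features sketch from Section 2.1, not the authors' MATLAB implementation; the helper names and the choice of multiprocessing are assumptions).

from multiprocessing import Pool
import numpy as np

def glac_for_pc(args):
    """GLAC features around every pixel of interest in one principal-component image."""
    pc_image, coords, w, num_bins, dr = args
    half = w // 2
    padded = np.pad(pc_image, half, mode='symmetric')   # so border pixels get full w-by-w patches
    return np.vstack([glac_features(padded[r:r + w, c:c + w], num_bins, dr) for (r, c) in coords])

def extract_glac_parallel(pcs, coords, w=21, num_bins=7, dr=4):
    """pcs: list of K principal-component images; returns one composite feature vector per pixel."""
    with Pool(processes=min(len(pcs), 4)) as pool:
        per_pc = pool.map(glac_for_pc, [(pc, coords, w, num_bins, dr) for pc in pcs])
    return np.hstack(per_pc)                             # concatenation across PCs, as in Figure 2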

4. Conclusion and future work

In this paper, a spatial feature extraction method based on auto-correlations of local image gradients was proposed for hyperspectral imagery (HSI) classification. The gradient local auto-correlations (GLAC) features utilize the spatial and orientational auto-correlations of local gradients to describe the rich texture information in hyperspectral images. The experimental results on two standard datasets demonstrated the superior performance of GLAC over several state-of-the-art spatial feature extraction methods. Although the proposed GLAC feature extraction method provides effective classification results, we believe that there is room for further improvement. In future work, we plan to extend GLAC to a 3-D version (similar to a 3-D Gabor filter or a 3-D wavelet), thereby extracting features directly from the 3-D hyperspectral image cube.

Acknowledgement

We acknowledge the support of the Natural Science Foundation of China under Contracts 61272052 and 61473086, the Program for New Century Excellent Talents in University of the Ministry of Education of China, and the Key Program of the Hubei Provincial Department of Education under Grant 2014602.

References

[1] C. Chen, W. Li, E. W. Tramel, and J. E. Fowler, "Reconstruction of hyperspectral imagery from random projections using multihypothesis prediction," IEEE Transactions on Geoscience and Remote Sensing, vol. 52, no. 1, pp. 365-374, Jan. 2014.
[2] Y. Tarabalka, J. A. Benediktsson, and J. Chanussot, "Spectral-spatial classification of hyperspectral imagery based on partitional clustering techniques," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 8, pp. 2973-2987, Aug. 2009.
[3] H. Su, B. Yong, P. Du, H. Liu, C. Chen, and K. Liu, "Dynamic classifier selection using spectral-spatial information for hyperspectral image classification," Journal of Applied Remote Sensing, vol. 8, no. 1, p. 085095, Aug. 2014.
[4] C. Chen, W. Li, E. W. Tramel, M. Cui, S. Prasad, and J. E. Fowler, "Spectral-spatial preprocessing using multihypothesis prediction for noise-robust hyperspectral image classification," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 4, pp. 1047-1059, Apr. 2014.
[5] C. Chen, E. W. Tramel, and J. E. Fowler, "Compressed-sensing recovery of images and video using multihypothesis predictions," in Proceedings of the 45th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2011, pp. 1193-1198.
[6] C. Chen and J. E. Fowler, "Single-image super-resolution using multihypothesis prediction," in Proceedings of the 46th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, Nov. 2012, pp. 608-612.
[7] Y. Qian, M. Ye, and J. Zhou, "Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features," IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 4, pp. 2276-2291, Apr. 2013.
[8] W. Li and Q. Du, "Gabor-filtering-based nearest regularized subspace for hyperspectral image classification," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 7, no. 4, pp. 1012-1022, Apr. 2014.
[9] C. Chen, W. Li, H. Su, and K. Liu, "Spectral-spatial classification of hyperspectral image based on kernel extreme learning machine," Remote Sensing, vol. 6, no. 6, pp. 5795-5814, June 2014.
[10] M. Pesaresi and J. Benediktsson, "A new approach for the morphological segmentation of high-resolution satellite imagery," IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 2, pp. 309-320, Feb. 2001.
[11] J. A. Benediktsson, J. A. Palmason, and J. Sveinsson, "Classification of hyperspectral data from urban areas based on extended morphological profiles," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 3, pp. 480-491, Mar. 2005.
[12] M. Dalla Mura, J. A. Benediktsson, B. Waske, and L. Bruzzone, "Morphological attribute profiles for the analysis of very high resolution images," IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 10, pp. 3747-3762, Oct. 2010.
[13] M. Dalla Mura, J. A. Benediktsson, B. Waske, and L. Bruzzone, "Extended profiles with morphological attribute filters for the analysis of hyperspectral data," International Journal of Remote Sensing, vol. 31, no. 22, pp. 5975-5991, Jul. 2010.
[14] W. Li, C. Chen, H. Su, and Q. Du, "Local binary patterns for spatial-spectral classification of hyperspectral imagery," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 7, pp. 3681-3693, July 2015.
[15] T. Kobayashi and N. Otsu, "Image feature extraction using gradient local auto-correlations," in ECCV 2008, Part I, vol. 5302, 2008, pp. 346-358.
[16] J. A. Richards and X. Jia, Remote Sensing Digital Image Analysis: An Introduction. Berlin, Germany: Springer-Verlag, 2006.
[17] J. Ren, J. Zabalza, S. Marshall, and J. Zheng, "Effective feature extraction and data reduction in remote sensing using hyperspectral imaging," IEEE Signal Processing Magazine, vol. 31, no. 4, pp. 149-154, July 2014.
[18] G.-B. Huang, H. Zhou, X. Ding, and R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 2, pp. 513-529, Apr. 2012.
[19] C. Chen, R. Jafari, and N. Kehtarnavaz, "Action recognition from depth sequences using depth motion maps-based local binary patterns," in Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Waikoloa Beach, HI, Jan. 2015, pp. 1092-1099.
[20] http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes
[21] J. Li, P. R. Marpu, A. Plaza, J. M. Bioucas-Dias, and J. A. Benediktsson, "Generalized composite kernel framework for hyperspectral image classification," IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 9, pp. 4816-4829, Sep. 2013.
[22] T. Kobayashi and N. Otsu, "Motion recognition using local auto-correlation of space-time gradients," Pattern Recognition Letters, vol. 33, no. 9, pp. 1188-1195, July 2012.
[23] T.-K. Tran, N.-N. Bui, and J.-Y. Kim, "Human detection in video using poselet combined with gradient local auto-correlation classifier," in Proceedings of the 2014 International Conference on IT Convergence and Security, Beijing, China, Oct. 2014, pp. 1-4.
5795-5814, June 2014. [10] M. Pesaresi, and J. Benediktsson, A new approach for the morphological segmentation of high resolution satellite imagery, IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 2, pp. 309-320, Feb. 2001. [11] J. A. Benediktsson, J. A. Palmason, and J. Sveinsson, Classification of hyperspectral data from urban areas based on extended morphological profiles, IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 3, pp. 480-491, March 2005. [12] M. alla Mura, J. A. Benediktsson, B. Waske, and L. Bruzzone, Morphological attribute profiles for the analysis of very high resolution images, IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 10, pp. 3747-3762, Oct. 2010. [13] M.. Mura, J. A. Benediktsson, B. Waske, and L. Bruzzone, Extended profiles with morphological attribute filters for the analysis of hyperspectral data, Int. J. Remote Sens., vol. 31, no. 22, pp. 5975-5991, Jul. 2010. [14] W. Li, C. Chen, H. Su, and Q. u, Local binary patterns for spatial-spectral classification of hyperspectral imagery, IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 7, pp. 3681-3693, July 2015. [15] T. Kobayashi, and N. Otsu, Image feature extraction using gradient local auto-correlation, in ECCV 2008, Part I, vol. 5302, 2008, pp. 346-358. [16] J. A. Richards, and X. Jia, Remote Sensing igital Image Analysis: An Introduction. Berlin, Germany: Springer-Verlag, 2006. [17] J. Ren, J. Zabalza, S. Marshall, and J. Zheng, Effective Feature Extraction and ata Reduction in Remote Sensing Using Hyperspectral Imaging, IEEE Signal Processing Magazine, vol. 31, no. 4, pp. 149-154, July 2014. [18] G. B. Huang, H. Zhou, X. ing, and R. Zhang, Extreme learning machine for regression and multiclass classification, IEEE Trans. Syst., Man, Cybern., Part B: Cybern., vol. 42, no. 2, pp. 513-529, Apr. 2012. [19] C. Chen, R. Jafari, and N. Kehtarnavaz, Action Recognition from epth Sequences Using epth Motion Maps-based Local Binary Patterns, Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Waikoloa Beach, HI, January 2015, pp. 1092-1099. [20] http://www.ehu.eus/ccwintco/index.php?title=hyperspectral_remo te_sensing_scenes [21] J. Li, P. R. Marpu, A. Plaza, J. M. Bioucas-ias, and J. A. Benediktsson, Generalized composite kernel framework for hyperspectral image classification, IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 9, pp. 4816-4829, Sep. 2013. [22] T. Kobayashi, and N. Otsu, Motion recognition using local autocorrelation of space time gradients, Pattern Recognition Letters, vol. 33, no. 9, pp. 1188-1195, July 2012. [23] T-K. Tran, N-N. Bui, and J-Y. Kim, Human detection in video using poselet combine with gradient local auto correlation classifier, Proceedings of 2014 International Conference on IT Convergence and Security, Beijing, China, Oct. 2014, pp. 1-4. 226