Fusion of pixel-based and object-based features for classification of urban hyperspectral remote sensing data


Fusion of pixel-based and object-based features for classification of urban hyperspectral remote sensing data

Wenzhi Liao a,*, Frieke Van Coillie b, Flore Devriendt b, Sidharta Gautama a, Aleksandra Pizurica a, Wilfried Philips a

a iMinds-TELIN-IPI, Faculty of Engineering and Architecture, Ghent University, Belgium
b FORSIT Research Unit, Faculty of Bioscience Engineering, Ghent University, Belgium
*Corresponding author: wenzhi.liao@telin.ugent.be, +3292643270, +3292644295

Abstract: Hyperspectral imagery contains a wealth of spectral and spatial information that can improve target detection and recognition performance. Typically, spectral information is inferred per pixel, while spatial information related to texture, context and geometry is deduced per object. Existing feature extraction methods cannot fully exploit both the spectral and the spatial information. Data fusion by simply stacking different feature sources together does not take the differences between feature sources into account. In this paper, we propose a feature fusion method that couples dimension reduction and data fusion of the pixel-based and object-based features of hyperspectral imagery. The proposed method accounts for the properties of the different feature sources, and takes full advantage of both the pixel-based and the object-based features through a fusion graph. Experimental results on the classification of an urban hyperspectral remote sensing image are very encouraging.

Keywords: Data fusion, Remote sensing, Object-based features, Pixel-based features.

1. Introduction

Nowadays, advanced sensor technology and image processing algorithms allow us to measure different aspects of the objects on the Earth's surface. Both spectral and spatial characteristics can be extracted from hyperspectral remote sensing data and used to identify and discriminate between ground objects.
In particular, spectral features inferred from hyperspectral image pixels provide a detailed description of the spectral signatures of ground covers, whereas spatial features deduced from image objects give detailed information about the texture, context and geometry of the surface objects (Rastner et al., 2014; Myint et al., 2011; Weih and Riggan, 2010). Spectral or spatial features alone might not be sufficient to obtain reliable classification results in an urban context; instead, the combination of both sources can contribute to a more comprehensive interpretation of the ground objects (Zhang and Tang, 2013; Chan et al., 2009). For example, spectral signatures cannot differentiate between objects made of the same material (e.g. roofs and roads covered with the same asphalt), while such objects may be easily distinguished by their geometry. Spatial features alone, on the other hand, may fail to discriminate between objects that are quite different in nature (e.g. a grass field and a swimming pool) but similar in their geometry.

Stacking multi-source features together is a widely applied data fusion technique for classification (Chan et al., 2009). These methods first apply feature extraction on each individual feature source, after which all features are concatenated into one stacked vector

for classification. While such methods are appealing for their simplicity, they do not always perform better than using a single feature source, and sometimes perform worse.

Figure 1. Some of the object-based features: a. first PC extracted from the original HS image; b. GLCM entropy (all directions) of PC1; c. length-to-width ratio; d. area based on sub-object analysis.

This is because the values of the different components in the stacked feature vector can be significantly unbalanced. As a consequence, the information contained in the different feature sources is not equally represented or weighted. Furthermore, by stacking several feature sources together, the pooled data may contain redundant information. Last but not least, the increased dimensionality of the stacked features, combined with the limited number of labelled samples, may lead to the curse of dimensionality.

Therefore, we propose a graph-based fusion method that couples dimension reduction and data fusion of the pixel-based and object-based features. First, object-based features are generated on the first few principal components of the original hyperspectral data. Second, we build a fusion graph in which only feature points with similar spectral and spatial characteristics are connected. Finally, we solve the multi-source feature fusion problem by projecting both feature sources into a linear subspace in which points that are neighbors in the high-dimensional feature space (i.e. with both similar spectral and similar spatial characteristics) remain neighbors in the low-dimensional projected subspace. This way, the proposed method takes the properties of the different feature sources into account and exploits both pixel-based and object-based features through the fusion graph.
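The scale imbalance that makes naive stacking problematic can be illustrated numerically. The value ranges below (radiance-like spectral counts versus unit-scale shape ratios) and the feature counts are hypothetical stand-ins chosen only to mirror the situation described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical feature sources: spectral bands with radiance-like counts
# in [0, 10000]; object-based shape features (e.g. length/width) in [0, 1].
spectral = rng.uniform(0.0, 10000.0, size=(n, 103))
objects = rng.uniform(0.0, 1.0, size=(n, 28))

# Simply stacking the sources lets the spectral part dominate any
# Euclidean distance computed on the stacked vector:
stacked = np.hstack([spectral, objects])
var_spectral = spectral.var(axis=0).sum()
var_objects = objects.var(axis=0).sum()
print(f"variance ratio spectral/object: {var_spectral / var_objects:.1e}")
```

On these ranges the spectral block carries many orders of magnitude more variance than the object block, so the object-based information is effectively ignored by distance-based learning on the stacked vector.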
Our graph-based data fusion method won the "Best Paper Challenge" award of the 2013 IEEE GRSS Data Fusion Contest, albeit with experiments on multi-sensor data sources (hyperspectral and LiDAR data) (Debes et al., 2014).

2. Hyperspectral data and object-based features

Experiments were run on hyperspectral remote sensing data from an urban area. The data were collected by the ROSIS sensor over the University of Pavia, Italy, and comprise 610 × 340 pixels and 103 spectral bands after removal of noisy bands. The data set includes 9 land cover/use classes, with a very fine spatial resolution of 1.3 meters per pixel.

To obtain object-based features from a hyperspectral image, principal component analysis (PCA) was first applied to the original hyperspectral data, and the first 4 PCs (representing 99% of the cumulative variance) were selected to generate the object-based features. A multi-resolution segmentation was performed with eCognition on these first 4 PCs, and a total of 28 object-based features were generated: Mean, StdDev, Ratio, GLCM Contrast, Entropy and Correlation on each of the first 4 PCs, plus Area, Length/Width, Shape index and Width. Fig. 1 shows some of the object-based features.
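The PCA step can be sketched as follows. The random cube is a stand-in for the Pavia scene, and the 0.99 threshold mirrors the 99% cumulative variance criterion used above; the segmentation and GLCM feature extraction themselves are done in eCognition and are not reproduced here:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for a hyperspectral cube (rows x cols x bands); the Pavia
# scene is 610 x 340 x 103.
rng = np.random.default_rng(0)
cube = rng.random((60, 40, 103))

# Flatten to (pixels, bands) and keep the smallest number of PCs that
# explains 99% of the cumulative variance, as done before segmentation.
X = cube.reshape(-1, cube.shape[-1])
pca = PCA(n_components=0.99)  # float in (0, 1): variance-ratio threshold
pcs = pca.fit_transform(X)

# Reshape back to image form; these PC images are the input to the
# multi-resolution segmentation and object-feature extraction.
pc_images = pcs.reshape(cube.shape[0], cube.shape[1], -1)
print(pc_images.shape, pca.explained_variance_ratio_.sum())
```

On real hyperspectral data, where adjacent bands are strongly correlated, 99% of the variance is typically reached with only a handful of components (4 for the Pavia scene); on the random stand-in data, many more are needed.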

3. Proposed feature fusion method

We first build a graph for each feature source. For example, the graph constructed from the spectral features is G^Spec = (X^Spec, A^Spec), where X^Spec denotes the pixels of the original HS data and A^Spec represents the edges of the graph. The edge between data points x_i^Spec and x_j^Spec is denoted A_ij^Spec ∈ {0, 1}: A_ij^Spec = 1 if x_i^Spec and x_j^Spec are close, and A_ij^Spec = 0 if they are far apart. "Close" is defined as belonging to the k nearest neighbors (kNN) of the other data point. The kNNs of a data point x_i^Spec are its k nearest neighbors in terms of spectral signatures; likewise, when the graph is constructed from the object-based features, the kNNs of a data point x_i^Spat are its k nearest neighbors in terms of spatial characteristics.

Figure 2. Proposed data fusion framework.

We define a fusion graph G^Fus = (X^Sta, A^Fus), where X^Sta = [X^Spec; X^Spat] and A^Fus = A^Spec ∘ A^Spat. The operator ∘ denotes element-wise multiplication, i.e. A_ij^Fus = A_ij^Spec A_ij^Spat. This means that the stacked data points x_i^Sta and x_j^Sta are connected only if they have both similar spectral and similar spatial characteristics. Fig. 2 shows the proposed fusion framework; for details on how the fused features are obtained, we refer the reader to our recent work (Debes et al., 2014).

4. Experimental results

An SVM classifier with a radial basis function (RBF) kernel is applied in our experiments. We compare the proposed method with the following schemes: (1) using the original HS data (Spectral); (2) using the object-based features computed on the first 4 PCs of the original HS data (OBIA); (3) stacking all spectral and object-based features together (Sta); (4) features fused using a graph constructed from the stacked features, i.e. LPP (He and Niyogi, 2004) (LPP). We randomly select 20 labelled samples per class for training.
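A minimal sketch of the fusion-graph construction and an LPP-style linear projection on that graph is given below. The function name, the symmetrization step and the small regularization term are our own choices for numerical robustness, not details from the paper; for the exact formulation we refer to Debes et al. (2014):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph


def fusion_lpp(X_spec, X_spat, k=10, dim=5):
    """Connect samples i and j only when they are kNN in BOTH feature
    spaces (A_fus = A_spec * A_spat element-wise), then learn a linear
    projection of the stacked features that keeps connected samples
    close in the low-dimensional subspace (as in LPP)."""
    A_spec = kneighbors_graph(X_spec, k, mode='connectivity').toarray()
    A_spat = kneighbors_graph(X_spat, k, mode='connectivity').toarray()
    A_fus = A_spec * A_spat              # element-wise product of the graphs
    A_fus = np.maximum(A_fus, A_fus.T)   # symmetrize the adjacency matrix

    X = np.hstack([X_spec, X_spat]).T    # stacked features, shape (d, n)
    D = np.diag(A_fus.sum(axis=1))       # degree matrix
    L = D - A_fus                        # graph Laplacian
    # LPP-style generalized eigenproblem: X L X^T v = lambda X D X^T v;
    # eigenvectors with the smallest eigenvalues span the fused subspace.
    M1 = X @ L @ X.T
    M2 = X @ D @ X.T + 1e-6 * np.eye(X.shape[0])  # regularized for stability
    _, V = eigh(M1, M2)                  # ascending generalized eigenvalues
    return X.T @ V[:, :dim]              # fused low-dimensional features


# Toy usage with random stand-in features
rng = np.random.default_rng(0)
Z = fusion_lpp(rng.random((200, 30)), rng.random((200, 12)), k=8, dim=5)
print(Z.shape)
```

The element-wise product is the key step: an edge survives only where both single-source graphs agree, so the learned subspace is driven by points that are similar both spectrally and spatially.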
The classifiers were evaluated on the remaining labelled samples by measuring the Overall Accuracy (OA), the Average Accuracy (AA) and the Kappa coefficient (κ). Tab. 1 shows the accuracies obtained in the experiments, and Fig. 3 shows the classification maps. The proposed feature fusion method yields the best overall performance, with a 4% improvement in OA over the best single pixel- or object-based feature source and an almost 3% improvement over the other fusion schemes. The use of pixel-based features alone yields an OA of less than 75% and thus produces a noisy classification map. The OBIA approach produces much better results, with more than a 10% improvement in OA and a much smoother classification map. The Sta method produces lower accuracies than using the object-based features alone, indicating that the spatial information contained in the original HS data is not well exploited in such a stacked architecture. This is not surprising, because the values of the different feature components can be significantly unbalanced, so the information contained in the different features is not equally well represented. The same problem occurs when the stacked features are used to build the graph in the LPP method. The proposed method overcomes these problems and thereby achieves better classification accuracies.
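The evaluation protocol can be reproduced in outline as follows. Synthetic data stands in for the fused Pavia features, and the metric definitions (OA as overall accuracy, AA as the mean of the per-class accuracies, κ as Cohen's kappa) are the standard ones:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, cohen_kappa_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the fused features; the paper uses 20 labelled
# samples per class for training and the remaining samples for testing.
X, y = make_classification(n_samples=600, n_features=24, n_informative=12,
                           n_classes=4, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=80,
                                          stratify=y, random_state=0)

clf = SVC(kernel='rbf', gamma='scale').fit(X_tr, y_tr)  # RBF-kernel SVM
y_pred = clf.predict(X_te)

oa = accuracy_score(y_te, y_pred)                 # Overall Accuracy
aa = recall_score(y_te, y_pred, average='macro')  # Average Accuracy
kappa = cohen_kappa_score(y_te, y_pred)           # Kappa coefficient
print(f"OA={oa:.3f}  AA={aa:.3f}  kappa={kappa:.3f}")
```

AA is computed here as macro-averaged recall, which equals the mean per-class accuracy; κ corrects OA for chance agreement, which is why it is reported alongside OA for imbalanced class distributions.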

Table 1. Classification accuracy assessment report.

                  Spectral   OBIA    Sta     LPP     Proposed
No. of features   103        28      131     26      24
OA (%)            74.91      86.19   81.39   87.05   90.66
AA (%)            83.13      87.99   84.20   89.12   91.76
κ                 0.685      0.820   0.759   0.837   0.878

Figure 3. Part of the classification maps: a. ground truth; b. Spectral; c. OBIA; d. Sta; e. proposed method.

5. Conclusions

In this paper, we presented a graph-based feature fusion method that includes both pixel-based and object-based features in the classification process. Experiments on a hyperspectral image demonstrate that data fusion by simply stacking several feature sources together may perform worse than using a single feature source. The proposed method considers the differences between pixel-based and object-based features, takes full advantage of both feature sources through the fusion graph, and thereby strongly improves the classification performance.

Acknowledgment

This work was financially supported by the SBO-IWT project Chameleon: Domain-specific Hyperspectral Imaging Systems for Relevant Industrial Applications. The authors would like to thank Prof. Paolo Gamba from the University of Pavia, Italy, for kindly providing the University Area hyperspectral data.

References

Rastner P., Bolch T., Notarnicola C., Paul F., 2014, A comparison of pixel- and object-based glacier classification with optical satellite images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 7, 3: 853-862.

Myint S. W., Gober P., Brazel A., Clarke S. G., Weng Q., 2011, Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sensing of Environment, Vol. 115, 5: 1145-1161.

Zhang A., Tang P., 2013, Fusion algorithm of pixel-based and object-based classifiers for remote sensing image classification. 2013 IEEE International Geoscience and Remote Sensing Symposium, Melbourne, Australia, pp. 2740-2743.
Chan J. C.-W., Bellens R., Canters F., Gautama S., 2009, An assessment of geometric activity features for per-pixel classification of urban man-made objects using very high resolution satellite imagery. Photogrammetric Engineering & Remote Sensing, Vol. 75, 4: 397-411.

Debes C., Merentitis A., Heremans R., Hahn J., Frangiadakis N., Kasteren T., Liao W., Bellens R., Pizurica A., Gautama S., Philips W., Prasad S., Du Q., Pacifici F., 2014, Hyperspectral and LiDAR data fusion: Outcome of the 2013 GRSS Data Fusion Contest. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (in press).

Weih R. C., Riggan N. D., 2010, Object-based classification vs. pixel-based classification: Comparative importance of multi-resolution imagery. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (GEOBIA 2010), Vol. XXXVIII-4/C7.

He X. F., Niyogi P., 2004, Locality preserving projections. Advances in Neural Information Processing Systems 16, MIT Press, Cambridge, pp. 153-160.