Non-Linear Separation of classes using a Kernel based Fuzzy c-means (KFCM) Approach


Non-Linear Separation of classes using a Kernel based Fuzzy c-means (KFCM) Approach

AKSHARA P BYJU
March, 2015

ITC SUPERVISOR: Prof. Dr. Ir. A. Stein
IIRS SUPERVISOR: Dr. Anil Kumar


Non-Linear Separation of classes using a Kernel based Fuzzy c-means (KFCM) Approach

Akshara P. Byju
Enschede, the Netherlands, 2015

Thesis submitted to the Faculty of Geo-information Science and Earth Observation of the University of Twente in partial fulfilment of the requirements for the degree of Master of Science in Geo-information Science and Earth Observation. Specialization: Geoinformatics

THESIS ASSESSMENT BOARD:
Chairperson: Prof. Dr. Ir. M. G. Vosselman
External Examiner: Dr. S. K. Ghosh
ITC Supervisor: Prof. Dr. Ir. A. Stein
ITC Professor: Prof. Dr. Ir. A. Stein
IIRS Supervisor: Dr. Anil Kumar

OBSERVERS:
ITC Observer: Dr. Nicholas Hamm
IIRS Observer: Dr. S. K. Srivastav

DISCLAIMER

This document describes work undertaken as part of a programme of study at the Faculty of Geo-information Science and Earth Observation (ITC), University of Twente, The Netherlands. All views and opinions expressed therein remain the sole responsibility of the author, and do not necessarily represent those of the institute.

Dedicated to my loving grandmother Shanta Gopinath, and to my mother and father.


ABSTRACT

Fuzzy classification of remote sensing images allows land cover to be characterized and classified with improved robustness and accuracy. Coarser resolution images contain mixed pixels as well as non-linearly separable data. The presence of mixed pixels and non-linear data reduces classification accuracy and increases computational complexity. Kernels have been used for clustering and classification problems: the similarity between any two samples is computed after the samples are implicitly mapped to a feature space where they become linearly separable. In this research, kernel based fuzzy clustering has been used to handle both the non-linearity and the mixed pixel problem. A supervised Kernel based Fuzzy c-means (KFCM) classifier has been used to improve the performance of the FCM classification technique. Eight kernel functions are incorporated into the objective function of the FCM classifier, so that the effect of each kernel function can be visualized in the generated fraction images. The best single kernels are selected by optimizing the weight constant, which controls the degree of fuzziness, using an entropy and a mean membership difference criterion. These are then combined to study the effect of composite kernels, which include both spatial and spectral properties. The Fuzzy Error Matrix (FERM) was used for accuracy assessment and was studied for AWiFS, LISS-III and LISS-IV datasets from Resourcesat-1 and Resourcesat-2. The Inverse Multiquadratic kernel (Resourcesat-1) and the Gaussian kernel using the Euclidean norm (Resourcesat-2) were found to have the highest overall fuzzy accuracies of 97.03% and 86.03%, respectively, for the LISS-III dataset. Among the composite kernels, the Gaussian-Spectral kernel was found to have an overall accuracy of 59.27% for LISS-III. Classification accuracy in the case of untrained classes was also studied, where a decrease in average user's accuracy was observed when compared to the trained case.
Keywords: Classification, Kernels, Kernel Fuzzy clustering, Feature Space, Fuzzy Error Matrix

ACKNOWLEDGEMENTS

Firstly, I would like to thank God Almighty for his abundant blessings throughout my research work. I would also like to thank my parents for always being there for me and for the encouragement and support they have given me throughout my life.

I would like to thank Prof. Dr. Alfred Stein for his suggestions and valuable remarks throughout my research work. I am honoured to have a supervisor like him. His in-depth knowledge and the lucidity of his words helped me to carry out this research successfully. I owe him my sincere gratitude for helping me throughout.

I express my sincere gratitude to my IIRS supervisor, Dr. Anil Kumar, for the valuable guidance and assistance he has rendered towards the completion of my research work. He has inspired me and helped me in all the ways the best of teachers can do for a student. I am highly obliged for his help and support.

I would also like to thank Dr. S. K. Srivastav for giving valuable suggestions and ensuring good research progress for all the students. I would also like to thank Mr. P. L. N. Raju, Group Head, for the suggestions and support he has given for completing the course. I express my sincere gratitude to all the IIRS faculty for helping me complete my modules successfully. I would also like to thank Dr. Nicholas Hamm for his support and kindness during the course. Special thanks to all my dear friends and all the IIRS members for the help and encouragement they have given throughout my course.

Akshara P Byju

TABLE OF CONTENTS

1. INTRODUCTION
   MOTIVATION AND PROBLEM STATEMENT
   RESEARCH OBJECTIVE
   RESEARCH QUESTIONS
   INNOVATION AIMED AT
   THESIS STRUCTURE
2. LITERATURE REVIEW
   LAND COVER CLASSIFICATION METHODS
   FUZZY C-MEANS (FCM)
   KERNELS
   ACCURACY ASSESSMENT
3. CLASSIFICATION APPROACHES, STUDY AREA AND METHODOLOGY
   CLASSIFICATION APPROACHES AND ACCURACY ASSESSMENT
      CLUSTERING
      THE FUZZY c-MEANS (FCM) CLASSIFIER
      KERNELS
      KERNEL BASED FUZZY C-MEANS (KFCM) CLASSIFIER
      ACCURACY ASSESSMENT
      FUZZY ERROR MATRIX (FERM)
      SUB-PIXEL CONFUSION UNCERTAINTY MATRIX
      ROOT MEAN SQUARE ERROR (RMSE)
      ENTROPY MEASURE
      MEAN MEMBERSHIP DIFFERENCE METHOD
   STUDY AREA AND MATERIALS USED
      STUDY AREA
      MATERIALS USED
      DATASET
      PREPROCESSING
      REFERENCE DATASET GENERATION
   METHODOLOGY
      GEO-REFERENCING
      PREPARATION OF REFERENCE DATASET
      SUB-PIXEL CLASSIFICATION ALGORITHMS
      FUZZY C-MEANS (FCM)
      KERNEL BASED FUZZY c-MEANS (KFCM)
      FCM WITH COMPOSITE KERNELS
      ACCURACY ASSESSMENT
4. RESULTS
   PARAMETER ESTIMATION
   RESULTS OF SUPERVISED FCM CLASSIFIER
   RESULTS OF FCM CLASSIFIER USING SINGLE KERNELS
   RESULTS OF FCM CLASSIFIER USING COMPOSITE KERNELS

   4.5. ACCURACY ASSESSMENT RESULTS
   UNTRAINED CLASSES
5. DISCUSSION
6. CONCLUSIONS AND RECOMMENDATIONS
   CONCLUSIONS
   ANSWERS TO RESEARCH QUESTIONS
   RECOMMENDATIONS
REFERENCES
APPENDIX A
APPENDIX B
APPENDIX C

LIST OF FIGURES

FIGURE 2-1: TWO CLUSTERS IN INPUT SPACE DENOTED IN DIFFERENT SHAPES, SHOWING THE NON-LINEARLY AND LINEARLY SEPARABLE CASES
FIGURE 3-1: CLUSTERING
FIGURE 3-2: MAPPING OF KERNELS TO A HIGHER DIMENSIONAL SPACE
FIGURE 3-3: AN IMAGE WITH SIX CLASSES IDENTIFIED ALONG WITH THE GENERATED FRACTIONAL IMAGES
FIGURE 3-4: GEOGRAPHICAL LOCATION OF STUDY AREA
FIGURE 3-5: LISS-IV (RESOURCESAT-2) IMAGE OF SITARGANJ TEHSIL WITH CLASSES (A) AGRICULTURAL FIELD WITH CROP (B) SAL FOREST (C) EUCALYPTUS PLANTATIONS (D) DRY AGRICULTURAL FIELD (E) WATER
FIGURE 3-6: METHODOLOGY ADOPTED
FIGURE 4-1: VARIATION IN ENTROPY WITH RESPECT TO WEIGHT CONSTANT m FOR GAUSSIAN KERNEL USING EUCLIDEAN NORM (RESOURCESAT-1 AWIFS)
FIGURE 4-2: VARIATION IN MEAN MEMBERSHIP DIFFERENCE WITH RESPECT TO WEIGHT CONSTANT m FOR GAUSSIAN KERNEL USING EUCLIDEAN NORM (RESOURCESAT-1 AWIFS)
FIGURE 4-3: ESTIMATION OF WEIGHT GIVEN TO EACH KERNEL (λ) USING (A) ENTROPY AND (B) MEAN MEMBERSHIP DIFFERENCE PLOT FOR GAUSSIAN-SPECTRAL KERNEL FROM AWIFS (RESOURCESAT-1)
FIGURE 4-4: MISCLASSIFIED OUTPUTS FOR GAUSSIAN-SPECTRAL RESOURCESAT-1 AWIFS FOR m=1.04 AND λ=0.80 FOR (A) AGRICULTURAL FIELD WITH CROP (B) SAL FOREST (C) EUCALYPTUS PLANTATIONS (D) DRY AGRICULTURAL FIELD WITHOUT CROP (E) MOIST AGRICULTURAL FIELD WITHOUT CROP (F) WATER
FIGURE 4-5: FRACTIONAL IMAGES GENERATED FOR OPTIMIZED m VALUES FOR FCM CLASSIFIER FOR (1) LISS-IV, (2) LISS-III AND (3) AWIFS (RESOURCESAT-1) IMAGES WITH IDENTIFIED CLASSES (A) AGRICULTURAL FIELD WITH CROP (B) SAL FOREST (C) EUCALYPTUS PLANTATION (D) DRY AGRICULTURAL FIELD WITHOUT CROP (E) MOIST AGRICULTURAL FIELD WITHOUT CROP AND (F) WATER
FIGURE 4-6: FRACTIONAL IMAGES GENERATED FOR OPTIMIZED m VALUES FOR FCM CLASSIFIER OF (1) LISS-IV, (2) LISS-III AND (3) AWIFS (RESOURCESAT-2) IMAGES WITH IDENTIFIED CLASSES (A) AGRICULTURAL FIELD WITH CROP (B) EUCALYPTUS PLANTATION (C) FALLOW LAND (D) SAL FOREST (E) WATER

FIGURE 4-7: GENERATED FRACTIONAL IMAGES FOR OPTIMIZED m VALUES FOR RESOURCESAT-1 LISS-IV FOR (I) LINEAR (II) POLYNOMIAL (III) SIGMOID (IV) GAUSSIAN KERNEL USING EUCLIDEAN NORM (V) RADIAL BASIS (VI) KMOD (VII) INVERSE MULTIQUADRATIC AND (VIII) SPECTRAL ANGLE KERNELS FOR CLASSES IDENTIFIED AS (A) AGRICULTURAL FIELD WITH CROP (B) SAL FOREST (C) EUCALYPTUS PLANTATIONS (D) DRY AGRICULTURAL FIELD WITHOUT CROP (E) MOIST AGRICULTURAL FIELD WITHOUT CROP AND (F) WATER
FIGURE 4-8: GENERATED FRACTIONAL IMAGES FOR OPTIMIZED m VALUES FOR RESOURCESAT-2 LISS-IV FOR (I) LINEAR (II) POLYNOMIAL (III) SIGMOID (IV) GAUSSIAN KERNEL USING EUCLIDEAN NORM (V) RADIAL BASIS (VI) KMOD (VII) INVERSE MULTIQUADRATIC AND (VIII) SPECTRAL ANGLE KERNELS FOR CLASSES IDENTIFIED AS (A) AGRICULTURAL FIELD WITH CROP (B) EUCALYPTUS PLANTATION (C) FALLOW LAND (D) SAL FOREST (E) WATER
FIGURE 4-9: GENERATED FRACTIONAL IMAGES FOR OPTIMIZED m VALUES OF RESOURCESAT-1 LISS-IV FOR (I) GAUSSIAN-SPECTRAL (II) IM-SPECTRAL (III) GAUSSIAN-LINEAR (IV) IM-LINEAR (V) LINEAR-SPECTRAL FOR CLASSES IDENTIFIED AS (A) AGRICULTURAL FIELD WITH CROP (B) SAL FOREST (C) EUCALYPTUS PLANTATION (D) DRY AGRICULTURAL FIELD WITHOUT CROP (E) MOIST AGRICULTURAL FIELD WITHOUT CROP (F) WATER
FIGURE 4-10: GRAPHICAL REPRESENTATION OF AVERAGE USER'S ACCURACY FOR UNTRAINED AND TRAINED CASE FOR IM AND FCM RESOURCESAT-1 (A) AWIFS (B) LISS-III AT OPTIMIZED m FOR RESOURCESAT-1
FIGURE 6-1: NON-LINEARITY IN DIFFERENT CLASSES AS 2D SCATTERPLOT FOR RESOURCESAT-1 LISS-IV IMAGE IN (A) BAND1-BAND2 (B) BAND2-BAND3 (C) BAND1-BAND3 FOR CLASSES IDENTIFIED
FIGURE A-1: GENERATED FRACTIONAL IMAGES FOR BEST SINGLE KERNELS FROM LISS-III (RESOURCESAT-1) FOR (I) LINEAR (II) INVERSE MULTIQUADRATIC (III) SPECTRAL ANGLE KERNEL FOR CLASSES IDENTIFIED AS (A) AGRICULTURE FIELD WITH CROP (B) SAL FOREST (C) EUCALYPTUS PLANTATION (D) DRY AGRICULTURE FIELD WITH CROP (E) MOIST AGRICULTURE FIELD WITH CROP (F) WATER
FIGURE A-2: GENERATED FRACTIONAL IMAGES FOR BEST SINGLE KERNELS FROM LISS-III (RESOURCESAT-2) FOR (I) LINEAR (II) GAUSSIAN KERNEL USING EUCLIDEAN NORM (III) SPECTRAL ANGLE KERNEL FOR CLASSES IDENTIFIED AS (A) AGRICULTURE FIELD WITH CROP (B) EUCALYPTUS PLANTATION (C) FALLOW LAND (D) SAL FOREST (E) WATER
FIGURE A-3: GENERATED FRACTIONAL IMAGES FOR BEST SINGLE KERNELS FROM AWIFS (RESOURCESAT-1) FOR (I) LINEAR (II) INVERSE MULTIQUADRATIC (III) SPECTRAL ANGLE KERNEL FOR CLASSES IDENTIFIED AS (A) AGRICULTURE FIELD WITH CROP (B) SAL FOREST (C) EUCALYPTUS PLANTATION (D) DRY AGRICULTURE FIELD WITH CROP (E) MOIST AGRICULTURE FIELD WITH CROP (F) WATER
FIGURE A-4: GENERATED FRACTIONAL IMAGES FOR BEST SINGLE KERNELS FROM AWIFS (RESOURCESAT-2) FOR (I) LINEAR (II) GAUSSIAN KERNEL USING EUCLIDEAN NORM (III) SPECTRAL ANGLE KERNEL FOR CLASSES IDENTIFIED AS (A) AGRICULTURE FIELD WITH CROP (B) EUCALYPTUS PLANTATION (C) FALLOW LAND (D) SAL FOREST (E) WATER
FIGURE A-5: VARIATION IN ENTROPY (E) AND MEAN MEMBERSHIP DIFFERENCE AGAINST THE WEIGHT CONSTANT (m) FOR RESOURCESAT-1 AWIFS FOR (I) FCM (II) LINEAR (III) POLYNOMIAL (IV) SIGMOID (V) GAUSSIAN KERNEL WITH EUCLIDEAN NORM (VI) RADIAL BASIS (VII) KMOD (VIII) IM (IX) SPECTRAL ANGLE
FIGURE A-6: VARIATION IN ENTROPY (E) AND MEAN MEMBERSHIP DIFFERENCE AGAINST THE WEIGHT CONSTANT (m) FOR GAUSSIAN-SPECTRAL ANGLE KERNEL FOR (A) RESOURCESAT-2 AWIFS (B) RESOURCESAT-1 LISS-III AND (C) RESOURCESAT-2 LISS-III
FIGURE A-7: VARIATION IN ENTROPY (E) AND MEAN MEMBERSHIP DIFFERENCE AGAINST THE WEIGHT CONSTANT (m) FOR IM-SPECTRAL ANGLE KERNEL FOR (A) RESOURCESAT-2 AWIFS (B) RESOURCESAT-1 LISS-III AND (C) RESOURCESAT-2 LISS-III

LIST OF TABLES

TABLE 3-1: RESOURCESAT-1 AND RESOURCESAT-2 SENSOR SPECIFICATION
TABLE 4-1: CLASSES IDENTIFIED AT SITARGANJ TEHSIL IN AWIFS, LISS-III AND LISS-IV SENSORS FOR RESOURCESAT-1 AND RESOURCESAT-2
TABLE 4-2: ESTIMATED OPTIMIZED m VALUES FOR FCM CLASSIFIER ALONG WITH THE CALCULATED MEAN MEMBERSHIP DIFFERENCE (λ) AND ENTROPY
TABLE 4-3: OPTIMIZED m VALUES FOR LOCAL, GLOBAL AND SPECTRAL ANGLE KERNELS FOR AWIFS, LISS-III AND LISS-IV IMAGES (RESOURCESAT-1) ALONG WITH THE CALCULATED MEAN MEMBERSHIP DIFFERENCE (Δ) AND ENTROPY (E)
TABLE 4-4: OPTIMIZED m VALUES FOR LOCAL, GLOBAL AND SPECTRAL ANGLE KERNELS FOR AWIFS, LISS-III AND LISS-IV IMAGES (RESOURCESAT-2) ALONG WITH THE CALCULATED MEAN MEMBERSHIP DIFFERENCE (Δ) AND ENTROPY (E)
TABLE 4-5: MAXIMUM MEAN MEMBERSHIP DIFFERENCE VALUES ESTIMATED FOR OPTIMIZED VALUES OF m (RESOURCESAT-1 AWIFS)
TABLE 4-6: OPTIMIZED m VALUES FOR COMPOSITE KERNELS FOR AWIFS, LISS-III AND LISS-IV IMAGES (RESOURCESAT-1) ALONG WITH THE CALCULATED MEAN MEMBERSHIP DIFFERENCE (Δ), ENTROPY (E) AND WEIGHT GIVEN TO EACH KERNEL (λ)
TABLE 4-7: OPTIMIZED m VALUES FOR COMPOSITE KERNELS FOR AWIFS, LISS-III AND LISS-IV IMAGES (RESOURCESAT-1) ALONG WITH THE CALCULATED MEAN MEMBERSHIP DIFFERENCE (Δ), ENTROPY (E) AND WEIGHT GIVEN TO EACH KERNEL (λ)
TABLE 4-8: ACCURACY ASSESSMENT RESULTS FOR FCM, BEST SINGLE KERNEL AND BEST COMPOSITE KERNELS
TABLE 4-9: COMPARISON OF ACCURACY ASSESSMENT IN TRAINED AS WELL AS UNTRAINED CASE FOR IM KERNEL AND FCM FOR AWIFS WITH LISS-III IMAGE (RESOURCESAT-1)
TABLE 4-10: COMPARISON OF ACCURACY ASSESSMENT IN TRAINED AS WELL AS UNTRAINED CASE FOR IM KERNEL AND FCM FOR LISS-III WITH LISS-IV IMAGE (RESOURCESAT-1)
TABLE 4-11: COMPARISON OF ACCURACY ASSESSMENT IN TRAINED AS WELL AS UNTRAINED CASE FOR GAUSSIAN KERNEL USING EUCLIDEAN NORM AND FCM FOR AWIFS WITH LISS-III IMAGE (RESOURCESAT-2)
TABLE 4-12: COMPARISON OF ACCURACY ASSESSMENT IN TRAINED AS WELL AS UNTRAINED CASE FOR GAUSSIAN KERNEL USING EUCLIDEAN NORM AND FCM FOR LISS-III WITH LISS-IV IMAGE (RESOURCESAT-2)
TABLE B-1: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-2: ACCURACY ASSESSMENT RESULTS FOR INVERSE MULTIQUADRATIC KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-3: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-4: ACCURACY ASSESSMENT RESULTS FOR SPECTRAL ANGLE KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-5: ACCURACY ASSESSMENT RESULTS FOR IM-SPECTRAL KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-6: ACCURACY ASSESSMENT RESULTS FOR IM-SPECTRAL KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-7: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-III (RESOURCESAT-1) REFERENCE DATA
TABLE B-8: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-III (RESOURCESAT-1) REFERENCE DATA
TABLE B-9: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-III (RESOURCESAT-1) REFERENCE DATA
TABLE B-10: ACCURACY ASSESSMENT RESULTS FOR SPECTRAL ANGLE KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-III (RESOURCESAT-1) REFERENCE DATA
TABLE B-11: ACCURACY ASSESSMENT RESULTS FOR IM-SPECTRAL ANGLE KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-III (RESOURCESAT-1) REFERENCE DATA
TABLE B-12: ACCURACY ASSESSMENT RESULTS FOR LINEAR-SPECTRAL ANGLE KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-III (RESOURCESAT-1) REFERENCE DATA
TABLE B-13: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED LISS-III (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-14: ACCURACY ASSESSMENT RESULTS FOR IM KERNEL CLASSIFIED LISS-III (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-15: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED LISS-III (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-16: ACCURACY ASSESSMENT RESULTS FOR SPECTRAL ANGLE KERNEL CLASSIFIED LISS-III (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA

TABLE B-17: ACCURACY ASSESSMENT RESULTS FOR IM-SPECTRAL ANGLE KERNEL CLASSIFIED LISS-III (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-18: ACCURACY ASSESSMENT RESULTS FOR LINEAR-SPECTRAL ANGLE KERNEL CLASSIFIED LISS-III (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-19: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-III (RESOURCESAT-2) REFERENCE DATA
TABLE B-20: ACCURACY ASSESSMENT RESULTS FOR GAUSSIAN KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-III (RESOURCESAT-2) REFERENCE DATA
TABLE B-21: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-III (RESOURCESAT-2) REFERENCE DATA
TABLE B-22: ACCURACY ASSESSMENT RESULTS FOR SPECTRAL ANGLE KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-III (RESOURCESAT-2) REFERENCE DATA
TABLE B-23: ACCURACY ASSESSMENT RESULTS FOR SPECTRAL-GAUSSIAN KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-III (RESOURCESAT-2) REFERENCE DATA
TABLE B-24: ACCURACY ASSESSMENT RESULTS FOR LINEAR-SPECTRAL ANGLE KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-III (RESOURCESAT-2) REFERENCE DATA
TABLE B-25: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-26: ACCURACY ASSESSMENT RESULTS FOR GAUSSIAN KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-27: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-28: ACCURACY ASSESSMENT RESULTS FOR SPECTRAL ANGLE KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-29: ACCURACY ASSESSMENT RESULTS FOR GAUSSIAN-SPECTRAL ANGLE KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-30: ACCURACY ASSESSMENT RESULTS FOR LINEAR-SPECTRAL ANGLE KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-31: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED LISS-III (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-32: ACCURACY ASSESSMENT RESULTS FOR GAUSSIAN KERNEL CLASSIFIED LISS-III (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-33: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED LISS-III (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-34: ACCURACY ASSESSMENT RESULTS FOR SPECTRAL ANGLE KERNEL CLASSIFIED LISS-III (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-35: ACCURACY ASSESSMENT RESULTS FOR GAUSSIAN-SPECTRAL ANGLE KERNEL CLASSIFIED LISS-III (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-36: ACCURACY ASSESSMENT RESULTS FOR LINEAR-SPECTRAL ANGLE KERNEL CLASSIFIED LISS-III (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-37: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-III (RESOURCESAT-1) REFERENCE DATA
TABLE B-38: ACCURACY ASSESSMENT RESULTS FOR IM KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-III (RESOURCESAT-1) REFERENCE DATA
TABLE B-39: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-III (RESOURCESAT-1) REFERENCE DATA
TABLE B-40: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-41: ACCURACY ASSESSMENT RESULTS FOR IM KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-42: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED AWIFS (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-43: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED LISS-III (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-44: ACCURACY ASSESSMENT RESULTS FOR IM KERNEL CLASSIFIED LISS-III (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-45: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED LISS-III (RESOURCESAT-1) AGAINST LISS-IV (RESOURCESAT-1) REFERENCE DATA
TABLE B-46: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-III (RESOURCESAT-2) REFERENCE DATA
TABLE B-47: ACCURACY ASSESSMENT RESULTS FOR GAUSSIAN KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-III (RESOURCESAT-2) REFERENCE DATA
TABLE B-48: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-III (RESOURCESAT-2) REFERENCE DATA
TABLE B-49: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-50: ACCURACY ASSESSMENT RESULTS FOR GAUSSIAN KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-51: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED AWIFS (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-52: ACCURACY ASSESSMENT RESULTS FOR FCM CLASSIFIED LISS-III (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-53: ACCURACY ASSESSMENT RESULTS FOR GAUSSIAN KERNEL CLASSIFIED LISS-III (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA
TABLE B-54: ACCURACY ASSESSMENT RESULTS FOR LINEAR KERNEL CLASSIFIED LISS-III (RESOURCESAT-2) AGAINST LISS-IV (RESOURCESAT-2) REFERENCE DATA

1. INTRODUCTION

Remote sensing techniques have been widely used to obtain useful information for the detection and discrimination of Earth surface cover. Digital images acquired by various sensors are used for a wide range of applications, such as disaster management, natural resource monitoring, urban planning and land use/land cover (LULC) mapping. For regional or global level LULC mapping, these digital images have become an effective source of information. Interpreting raw digital images by human interpretation alone, however, results in lower quantitative accuracy; a higher accuracy can be achieved by processing a digital image with computers (Richards and Jia, 2005). Lillesand and Kiefer (1979) describe digital image classification as a quantitative technique to classify image data into various categories. The results of image classification are summarized in the form of thematic maps by assigning class labels to each pixel in the image. These thematic maps are in turn used for mapping surface cover information, e.g. for conservation and development purposes. Supervised and unsupervised image classification are two broad categories of classification procedures (Campbell, 1996). In unsupervised classification a sample point is assigned to a cluster based on the similarity in spectral values of a pixel. Over time, the spectral properties of a class change, and at times the procedure identifies samples that do not correspond to that particular class; these are major limitations of the technique (Campbell, 1996). In supervised classification the analyst has control over assigning informational classes based on field data. Several statistical classification algorithms have been developed, such as the k-means classifier, the minimum distance to mean classifier and the maximum likelihood classifier (Tso and Mather, 2000). All these classifiers share a single objective: to improve classification accuracy.
Traditional classification techniques allocate each pixel to a single land cover class, resulting in a hard (or "crisp") partitioning (Zhang and Foody, 1998). Hard classification techniques assume that a single pixel in the image accounts for a uniform land cover class on the ground corresponding to the pixel size. In reality, however, it is rarely the case that such a pixel on the ground corresponds to a single and uniform land cover class. For regional or global level studies, coarse resolution remote sensing images are used that are dominated by mixed pixels: several land cover types or information classes are then contained in a single pixel. Conventional image classification techniques assign these mixed pixels to a single class, thus introducing error in the classified image and reducing classification accuracy. The main reasons for the presence of mixed pixels are the following (Zhang and Foody, 1998; Chawla, 2010):
- A coarse spatial resolution of a sensor results in several classes being included in a particular pixel. This produces a composite spectral response which may differ from that of each of its component classes.

- With time, land cover classes degrade from one class to another. For example, water can change to moist grassland, or agricultural crops are harvested with the seasonal change. As a result these land cover classes are mixed.
- The value of a pixel recorded by the sensor may be different for highly similar entities and similar for different entities.
- Pixel values can change based on the interaction of the electromagnetic waves with the atmosphere or objects.
The presence of these mixed pixels reduces classification accuracy in large proportion. Bezdek et al. (1984) introduced Fuzzy c-means (FCM), building on the idea of fuzzy sets put forward by Zadeh (1965), to address the mixed pixel problem. Zadeh's idea was to assign a particular sample or pixel to more than one cluster with the help of a membership grade varying between 0 and 1; a grade close to 1 indicates a high possibility that the sample belongs to that particular cluster, and vice versa (Bezdek et al., 1984). Fuzzy or soft classification techniques increase the accuracy of classification results for coarser resolution images. FCM is an alternative to the c-means clustering algorithm for pattern recognition. It is a flexible approach, as it assigns sample points to more than one cluster, but it performs well only for spherical clusters (Suganya and Shanthi, 2012). Krishnapuram and Keller (1996) introduced the Possibilistic c-means (PCM) classifier, an improvement on FCM in that PCM is more robust to noise. Linearly separable classes are the simplest cases in image classification; in pattern analysis, certain samples appear to be non-linear in nature due to redundancy in spectral values. A recent development was to use kernel methods in FCM to implement a non-linear version of the algorithm: a Kernel based Fuzzy c-means (KFCM) classifier was developed in order to classify non-linear data.
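The membership grades just described follow, in the standard Bezdek formulation, from the relative distances of a pixel to the cluster means. A minimal sketch in NumPy is given below; the array shapes, the value m = 2 and the function name are illustrative, not taken from this thesis:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-12):
    """Standard FCM membership update (Bezdek et al., 1984).

    X:       (n, d) array of pixel feature vectors.
    centers: (c, d) array of cluster means.
    m:       weight constant controlling the degree of fuzziness, m > 1.
    Returns an (n, c) array of grades in [0, 1] that sum to 1 per pixel.
    """
    # squared Euclidean distance of each sample to each cluster mean
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1) + eps
    # u_ik = 1 / sum_j (d2_ik / d2_jk)^(1/(m-1))
    ratio = d2[:, :, None] / d2[:, None, :]            # shape (n, c, c)
    return 1.0 / (ratio ** (1.0 / (m - 1))).sum(axis=-1)
```

A pixel lying exactly at a cluster mean receives a grade close to 1 for that cluster, while a pixel equidistant from all means receives equal grades, which is the mixed pixel case.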
For the KFCM, sample data that appear non-linear in the input space are mapped to a higher dimensional feature space where the sample points are considered linearly separable (Yang et al., 2007); carried out explicitly in the original input space, these computations would be complex and costly. Mercer kernel based clustering introduced the kernel functions used in Support Vector Machine classification of non-linear data to estimate the number of clusters within the data and perform clustering in the feature space (Girolami, 2002). Because of the ability of KFCM methods to cluster more shapes in the input dataset, their classification accuracies are much higher compared to FCM (Yang et al., 2007). Different types of kernels, such as positive definite kernels and stationary kernels, are discussed in Ben-Hur (2001). A total of eight kernel functions was considered in this study, categorized as local, global or spectral kernels. The four local kernels considered are: the Gaussian kernel using the Euclidean norm, the radial basis kernel, the inverse multiquadratic kernel and the kernel with moderate decrease with distance (KMOD). The three global kernels are: the linear kernel, the polynomial kernel and the sigmoid kernel (Kumar, 2007).
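For illustration, common textbook forms of a few of the kernel functions named above can be sketched as follows; the parameter values (sigma, c, p) are placeholders, and the exact parameterizations used in this study may differ:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # local kernel: similarity decays with squared Euclidean distance
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def inverse_multiquadratic_kernel(x, y, c=1.0):
    # local kernel: 1 / sqrt(||x - y||^2 + c^2)
    return 1.0 / np.sqrt(np.sum((x - y) ** 2) + c ** 2)

def linear_kernel(x, y):
    # global kernel: plain dot product
    return float(np.dot(x, y))

def polynomial_kernel(x, y, p=2, c=1.0):
    # global kernel: (x . y + c)^p
    return float((np.dot(x, y) + c) ** p)

def spectral_angle_kernel(x, y):
    # spectral kernel: cosine of the angle between the two spectra
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```

Local kernels depend only on the distance between the two spectra, global kernels on their dot product, and the spectral angle kernel only on the angle between them, which is what makes it insensitive to illumination differences.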

Two single kernels can be combined to form a composite kernel. This gives a better classification compared to a single kernel (Camps-Valls et al., 2006). Different combinations of single kernels can be adopted to inherit both the spectral and spatial properties of the single kernels (Camps-Valls et al., 2006). Kernel based clustering is more robust to noise and outliers and also tolerates unequally sized clusters, which is a major drawback of the FCM algorithm (Zhang and Chen, 2003). To assess the accuracy of soft classified outputs, many methods have been put forward (Binaghi et al., 1999; Congalton, 1991; Zhang and Foody, 1998). The traditional error matrix cannot be used because it assumes a one-pixel-one-class assignment. Binaghi et al. (1999) introduced the Fuzzy Error Matrix (FERM) to assess the accuracy of soft classified results; even though it is appealing, it is not considered a standard method. In the current research work, classified results from the single kernels and the composite kernels are compared based on the minimum entropy and maximum mean membership difference criteria, and the best among them is chosen.

1.1. MOTIVATION AND PROBLEM STATEMENT

Coarse resolution remote sensing images are used for mapping purposes at the regional or global level. These images may contain mixed pixels as well as non-linearity in the data, resulting in an incorrectly classified image. Soft classification methods have been found superior to hard classification in the presence of mixed pixels. Fuzzy classifiers are able to handle mixed pixels, whereas kernels are used to handle non-linear data. In light of the properties of both FCM and kernels, the current research was proposed to study the behaviour of different kernels using a Kernel based Fuzzy c-means (KFCM) classifier. A comparative approach was taken to analyse the performance of single and composite kernels.
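A composite kernel of the kind described above is commonly formed as a convex combination of two single kernels controlled by a weight λ; a minimal sketch, assuming this weighted-sum form (the constituent kernels and the value λ = 0.8 are illustrative, not the thesis configuration):

```python
import numpy as np

def composite_kernel(k1, k2, lam):
    """Return the weighted (convex) combination lam*k1 + (1-lam)*k2.

    A weighted sum of valid Mercer kernels is itself a valid kernel,
    so the combination can be used directly inside a kernel classifier.
    """
    def k(x, y):
        return lam * k1(x, y) + (1.0 - lam) * k2(x, y)
    return k

# illustrative constituents: a Gaussian (local) and a spectral angle kernel
gauss = lambda x, y: float(np.exp(-np.sum((x - y) ** 2) / 2.0))
spectral = lambda x, y: float(np.dot(x, y) /
                              (np.linalg.norm(x) * np.linalg.norm(y)))

gauss_spectral = composite_kernel(gauss, spectral, lam=0.8)
```

Tuning λ between 0 and 1 trades off the contribution of the two constituents, which is why λ appears as an optimized parameter alongside the weight constant m.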
The combination of the best single kernels is then taken as the composite kernel.

1.2. RESEARCH OBJECTIVE

The main objective of this research work is to optimally separate non-linear classes using a Kernel based Fuzzy c-means approach. The specific objectives are:
- To develop an objective function for the Kernel based Fuzzy c-means (KFCM) classifier to handle non-linear class separation.
- To select the best single or composite kernel to be used within the KFCM classifier.
- To evaluate the performance of this classifier in the case of untrained classes.
- To study the best kernel model with the best possible parameters.
Finally, the soft outputs of the KFCM classifier were studied using an image-to-image accuracy assessment approach.
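The image-to-image accuracy assessment of soft outputs mentioned in the last objective can be illustrated with the FERM of Binaghi et al. (1999), whose cross-comparison entries take the minimum of the classified and reference membership grades; a minimal sketch, assuming memberships are stored as (pixels × classes) arrays:

```python
import numpy as np

def fuzzy_error_matrix(ref, cls):
    """Fuzzy Error Matrix (FERM) after Binaghi et al. (1999).

    ref, cls: (n, c) membership grades of n pixels for c classes in the
    reference and classified images. Entry (i, j) accumulates, over all
    pixels, the minimum of the classified grade for class i and the
    reference grade for class j.
    """
    n, c = ref.shape
    ferm = np.zeros((c, c))
    for i in range(c):            # classified class
        for j in range(c):        # reference class
            ferm[i, j] = np.minimum(cls[:, i], ref[:, j]).sum()
    overall = np.trace(ferm) / ref.sum()   # overall fuzzy accuracy
    return ferm, overall
```

With crisp (0/1) memberships the FERM reduces to the traditional error matrix, which is what makes it a natural soft generalization.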

1.3. RESEARCH QUESTIONS

The following research questions are formulated from the research objectives:
1. How can non-linearity within a class boundary in feature space be handled effectively using KFCM?
2. How can mixed pixels be handled using KFCM?
3. How can the performance of single/composite kernels be evaluated using KFCM?
4. To what degree is the FCM classification algorithm capable of handling non-linear feature vectors of different classes for classification?
5. What is the effect of using composite kernels in KFCM as compared to single kernels?

1.4. INNOVATION AIMED AT

The Kernel based Fuzzy c-means approach is studied with eight single kernels, and the best among them are selected to form the best composite kernel. In this research work, a comparative analysis is made between FCM and KFCM performance by optimizing the values of the different parameters used in these algorithms.

1.5. THESIS STRUCTURE

The thesis presents the work done for this research in six chapters. The first chapter gives a brief introduction to the research, the objectives to be accomplished and the research questions formulated from these objectives. The second chapter describes previous work related to this research. The third chapter explains the concepts and formulas used, along with the study area and the methodology adopted. The fourth chapter presents the results obtained from the developed classifier. The fifth chapter discusses these results along with the accuracy assessment results. Finally, the sixth chapter concludes the research with the answers to the research questions and the possibilities for further research work.

2. LITERATURE REVIEW

2.1. LAND COVER CLASSIFICATION METHODS
The literature about Land Use/Land Cover (LULC) information is exceedingly broad. Multi-spectral image classification techniques are used for information extraction in various environmental studies. Image classification approaches can be categorized as supervised or unsupervised, crisp or fuzzy, and parametric or non-parametric (Lu and Weng, 2007). As the spectral properties of information classes change over time and some spectral classes do not correspond to information classes, unsupervised classification is not considered as advantageous as the supervised approach (Campbell, 1996). The k-means algorithm and fuzzy clustering constitute common unsupervised classification methods (Tso and Mather, 2000). Supervised classification algorithms such as the Maximum Likelihood (ML) classifier and the Minimum Distance to Means classifier were introduced taking a one-pixel-one-class approach (Choodarathnakara et al., 2012a). In the parallelepiped method, a parallelepiped-like subspace is defined for each class. Even though this method is easy to implement, errors occur in two cases: 1) when a pixel lies in more than one parallelepiped and 2) when a pixel lies outside all parallelepipeds (Tso and Mather, 2000). The ML and the Minimum Distance to Means classifiers are based on the evaluation of spectral response patterns when classifying an unknown pixel. The Minimum Distance to Means classifier is one of the simplest classification approaches, but it is insensitive to different degrees of variance in the spectral response data (Lillesand and Kiefer, 1979). The ML classifier is a statistical method that quantitatively evaluates both the covariance and the correlation of spectral response patterns when an unknown pixel is classified. The major drawback of the ML classifier, however, is the large number of computations required to classify each pixel (Lillesand and Kiefer, 1979).
The ML classifier cannot perform well in the presence of mixed pixels because of the difficulty of differentiating between features with similar spectra (e.g. forest and grassland) (Tan et al., 2011). For regional or global level studies, coarse resolution images that contain mixed pixels are widely used. Conventional crisp classification algorithms are incapable of mapping sub-pixel level information (Settle and Drake, 1993). Statistical or traditional image classifiers such as the ML classifier do not take the presence of mixed pixels into account, resulting in a low classification accuracy (Kavzoglu and Reis, 2008). A fuzzy classifier addresses this problem by allowing a pixel to be assigned to more than one land cover class. Advanced soft image classification techniques such as Artificial Neural Networks (ANN), Genetic Algorithms (GA) and Decision Tree classifiers are trending research areas. A comparative study between the ML classifier and an Artificial Neural Network (ANN) by Kavzoglu and Reis (2008) showed that the ANN provides better classification accuracy than the ML classifier. Due to spectral similarity and

superposition of the spectral regions of several classes, the ML algorithm, which relies on statistical estimates, wrongly identified many pixels in the resulting image.

2.2. FUZZY c-MEANS (FCM)
The introduction of fuzzy logic gave way to the Fuzzy c-means (FCM) clustering technique. FCM permits a sample data point to belong to several clusters. Zadeh (1965) introduced fuzzy sets, where each sample data point is assigned to a cluster based on a membership grade (degree of sharing) which can range between zero and unity. Fuzzy logic is an effective method in image classification, and collateral data can also be classified well (Choodarathnakara et al., 2012b). In FCM, a sample data point is assigned to a cluster based on high intra-cluster resemblance (Bezdek et al., 1984). The FCM algorithm used by Bezdek et al. (1984) is unsupervised in nature. When information about the classes of interest is known a priori, supervised image classification techniques are most widely used (Campbell, 1996). Wang (1990) introduced fuzzy supervised classification of remote sensing images with higher classification accuracy. Supervised FCM classification was used for the estimation and mapping of sub-pixel land cover composition (Foody, 2000; Atkinson et al., 1997). In FCM, the proportions of the land cover types are reflected in the fuzzy membership values (Fisher and Pathirana, 1990). Earlier fuzzy approaches dealt with fuzziness only in the class allocation stage but not in the testing or training stages. A classification approach which accommodates fuzziness in the allocation, training and testing stages is considered fully fuzzy, whereas a fuzzy approach which allows fuzziness only at the allocation stage is termed partially fuzzy (Zhang and Foody, 1998). Zhang and Foody (2002) showed an improvement in accuracy, from 6.6% to 5.0%, when a fully fuzzy supervised classification was used rather than a partially fuzzy classification.
FCM generates memberships that represent a degree of sharing but not a degree of typicality (Krishnapuram and Keller, 1996), and it shows poor performance in the presence of noise and outliers. FCM is one of the most popular techniques used in the field of medical image segmentation. Based upon the concept of data compression, an improved FCM (IFCM) was introduced in which the dimensionality of the input data was reduced with a change in the cluster and membership value criterion (Hemanth et al., 2009). It has been pointed out by Vinushree et al. (2014) that FCM is effective only in clustering crisp, spherical and non-overlapping data. Suganya and Shanthi (2012) concluded that the algorithm performs well in the case of spherical clusters, is sensitive to noise and expects a low degree of membership for outliers. The data in an image exhibit different patterns that may or may not be clearly visible. Pattern analysis refers to a class of machine learning algorithms that classify data based on the properties of different patterns. Linearly separable classes are the simplest case that can appear in the pattern of a dataset (Isaacs et al., 2007). If the data appear to be non-linearly separable, the classification will be computationally intricate in the

original input space. For separating such non-linear data, many kernel based methods have been introduced in recent years (Girolami, 2002; Camps-Valls and Bruzzone, 2009). These methods map the input data to a higher dimensional space where the data turn out to be linearly separable (Awan and Sap, 2005). Kernel based algorithms were mostly used in Support Vector Machines (SVM), a statistical learning approach which uses kernels for remote sensing classification (Pal, 2009). Kernel methods are used in a wide range of applications: Huang and Zhu (2006) proposed a non-linear feature extraction algorithm for speech recognition, and kernels have also shown their importance in classification, face recognition, speech recognition and many other fields. KFCM was introduced for classifying non-linearly separable data and also deals with the drawbacks of fuzzy clustering. Ravindraiah and Tejaswini (2013) studied the hierarchical evolution of different types of fuzzy clustering techniques for image segmentation.

2.3. KERNELS
Kernels are machine learning tools for pattern analysis which were introduced for SVM clustering and can generate cluster boundaries of arbitrary shape (Ben-Hur et al., 2001). Kernel functions map sample data from the initial sample space into a higher dimensional space where the sample data are linearly separable, and allow interpreting data in feature space. When transforming data to a higher dimension it should be ensured that the non-linear transformation does not introduce structure into the inherent data (Girolami, 2002). The approach has been adopted for unsupervised learning but suits only hyper-spherical or hyper-ellipsoidal clusters. From a statistical perspective there are different classes of kernels: positive-definite kernels, stationary kernels, locally stationary kernels, non-stationary kernels and reducible kernels (Genton, 2001).

Figure 2-1: Two clusters in input space, denoted by different shapes, showing the non-linearly and linearly separable cases.
Zhang and Chen (2002) introduced fuzzy clustering using kernel methods, where both spherical and overlapping datasets were used to evaluate the performance of KFCM and FCM. Spectral and kernel clustering were found to share a unifying theory: in spectral methods there is an adjacency between

patterns which is analogous to the kernel function (Filippone et al., 2008). KFCM can be divided into two categories: 1) the prototypes of the FCM algorithm reside in feature space and are implicitly mapped to the kernel space by means of a kernel function; 2) the prototypes are directly constructed in kernel space, which allows more freedom for the prototypes in feature space (Huang et al., 2012). Graves and Pedrycz (2007) evaluated the performance of kernel based fuzzy clustering and concluded that the performance with kernels was better but required fine tuning of the parameters. These methods are well suited for clustering ring datasets and similar structures such as square datasets. Kim et al. (2001) used Kernel Principal Component Analysis (KPCA) applying a polynomial kernel for the analysis of texture classification and showed that kernel PCA gave an overall good performance. Many other uses have also been introduced for kernels in the field of image classification (Camps-Valls et al., 2004). Camps-Valls and Bruzzone (2005) used linear, polynomial and Radial Basis Function kernels for hyperspectral image classification; the polynomial kernel showed an overall good performance and was robust to common levels of noise. Bhatt and Mishra (2013, 2014) used local kernels, i.e. the KMOD and the inverse multiquadratic kernel, as well as global kernels, i.e. the linear, polynomial and sigmoid kernels, to classify water and vegetation. Huang et al. (2011) introduced a weighting matrix to the Radial Basis Function (RBF) kernel to weigh the training samples according to their information significance. Composite kernels sum up the spectral and textural information in the input image to the classified output and gave excellent performance when compared to a single kernel (Camps-Valls et al., 2006). Kernels can be combined based on the stacked approach, direct summation, weighted summation and the cross-information kernel.
Different types of kernels have been used for multi-temporal classification of remote sensing images and for change detection, tackling real world problems such as urban monitoring (Camps-Valls et al., 2008); composite kernels gave the best results in the case of urban monitoring. The overall accuracy of various mixtures of kernel functions varies with the weight given to each kernel in SVM (Kumar et al., 2006), and the best combination of kernels changes with the datasets used. KFCM algorithms have been used to achieve optimization of clustering and classification (KOCC) simultaneously. Even though KFCM outperforms FCM, the clustering sometimes also depends on the densities and shapes of the datasets used (Tsai and Lin, 2011). The computational load of KFCM is very high if the total number of data points is large, especially if these methods are used for image segmentation. KFCM can partition datasets only up to quadratic functions, and higher polynomial functions are still a research area. The Kernel with Moderate Decrease of spatial distance (KMOD) class preserves the whole data closeness information while still penalizing the far neighbourhood in the case of sparse data (Ayat et al., 2001). Such kernels are more reliable when compared to the others; KMOD gives the best results in separating patterns when compared to the RBF or polynomial kernels.

2.4. ACCURACY ASSESSMENT
Accuracy assessment of remote sensing data provides users with a measure of confidence in the quality of a product. Many approaches have been introduced to assess classification accuracy. Congalton (1991) described the error matrix, also called the confusion matrix or contingency table, which is a square array of numbers set out in rows and columns expressing the number of sample units assigned to a particular category relative to the actual category as verified on the ground. The error matrix not only represents accuracy in tabular form but also presents the overall accuracy and the user's as well as the producer's accuracy (Congalton, 1991). The confusion matrix simply tells how well the classifier can classify the training area and nothing more (Lillesand and Kiefer, 1979). The Kappa statistic is considered a fundamental measure of accuracy (Smits et al., 1999); it serves as an indicator of the extent to which the percentage correct values of an error matrix are due to true agreement or chance agreement (Lillesand and Kiefer, 1979). These measures, however, are used to ascertain hard classification results. Due to the presence of sub-pixel class boundaries, such measures hardly represent the actual quality of the classified image. The need for accuracy assessment of sub-pixel classified images is shown in Latifovic and Olthof (2004). For assessing the accuracy of soft classification outputs no standard assessment technique is available (Harikumar, 2014). If no soft reference dataset is available, the output of a fuzzy classification can be hardened, which may lead to loss of information (Binaghi et al., 1999; Silvan-Cardenas and Wang, 2008; Okeke and Karnieli, 2006; Harikumar, 2014).
Silvan-Cardenas and Wang (2008) discussed the basic operators used for sub-pixel classification: the MIN operator gives the maximum sub-pixel overlap among classes, the PROD operator measures the expected class overlap between the reference and assessed sub-pixel partitions, and the LEAST operator measures the minimum possible sub-pixel overlap. If the assessed data matched the reference data perfectly, the error matrix should be diagonal, which is not the case for these basic operators. To satisfy the property of diagonalization, the composite operators MIN-MIN, MIN-PROD and MIN-LEAST (Pontius and Cheuk, 2006) were introduced. Based on the traditional error matrix, Binaghi et al. (1999) introduced the Fuzzy Error Matrix (FERM) for the evaluation of soft classifiers, though even this method cannot be considered a standard one. FERM provided a more accurate measure of Overall Accuracy (OA) with multi-membership grades, which proved more useful than a conventional OA based on hardened values (Binaghi et al., 1999). Silvan-Cardenas and Wang (2008) proposed a sub-pixel confusion-uncertainty matrix (SCM) for the confusion created in sub-pixel area allocation, which reports confusion intervals in the form of a centre value plus-minus a maximum error to account for the sub-pixel uncertainty. The Mean Relative Error (MRE), Root Mean Square Error (RMSE) and Correlation Coefficient (CC) criteria depend on the actual and desired outputs of the classifier and hence are more dependent on the error in the results. Dehghan and Ghassemian (2006) proposed an entropy measure which depends on the

actual outputs of the classifier and is sensitive to uncertainty. When the ground data are fuzzy, the interpretation of entropy values is difficult; in these cases the cross-entropy value helps (Foody, 1995). Using fuzzy classification and fuzzy ground data, the cross-entropy results indicate closeness in land cover composition. Yun-song and Yu-feng (2010) compared the accuracy of the KFCM and FCM algorithms using an error matrix, where the classification accuracy of KFCM was 3% higher than that of FCM. In the presence of mixed pixels, FERM gives a better result when compared to the traditional error matrix. Hence, in this research work the accuracy of the generated classified outputs has been assessed using FERM for all the kernels, and the performance of KFCM and FCM is compared so as to determine which classifier gives the better result.

3. CLASSIFICATION APPROACHES, STUDY AREA AND METHODOLOGY
The first section discusses the various concepts and approaches used in this research work along with the different kernels used. The second section describes the study area, the various sensors, and the processing steps for the datasets used. The third section explains the methodology adopted to carry out this research work.

3.1. CLASSIFICATION APPROACHES AND ACCURACY ASSESSMENT
The symbols used in this section are as follows:

X = {x_1, x_2, ..., x_n} : set of n sample points
x_i : spectral response of a pixel
Y : subset of the set X
c : number of clusters
N : number of pixels
m : weighting component (fuzzifier)
U : membership matrix of size (c × n)
μ(x) : membership grade of a sample point x
μ_ij : membership value of a pixel in the i-th row and j-th column
V = {v_1, v_2, ..., v_c} : vector of cluster centres
A : weight matrix
I : identity matrix
‖·‖²_A : squared A-norm
d_ij : distance norm between a sample point and a cluster centre
K(·,·) : kernel function

CLUSTERING
Clustering refers to the grouping of pixels that are spectrally similar in multispectral space (Richards and Jia, 2005). Clustering partitions the data into different clusters based on similar properties (Figure 3-1). Different algorithms for clustering have been introduced, such as single pass clustering and hierarchical clustering. Clustering can also be divided into hard and soft clustering (Richards and Jia, 2005). In hard clustering each pixel in the input image is assigned to a single cluster, whereas in fuzzy clustering each pixel is assigned to more than one cluster with a membership grade for each class, showing the degree of belongingness of a particular class in a pixel (Zadeh, 1965).

Figure 3-1: Clustering

We now consider the Fuzzy c-means (FCM) classifier, a widely used soft clustering technique introduced by Bezdek et al. (1984). FCM operates by assigning sample data to different clusters using a membership grade that varies between 0 and 1 (Bezdek et al., 1984).

THE FUZZY c-MEANS (FCM) CLASSIFIER
A fuzzy set is characterized by a membership function that associates each sample data point with a value in the interval [0, 1] symbolizing the membership grade. Let Y represent a set (class) in X (the space of points); then the fuzzy set Y is represented as in equation (3.1) (Camps-Valls and Bruzzone, 2009):

Y = {(x, μ(x)) | x ∈ X} (3.1)

Here μ(x) represents the membership grade and x a sample object in X (Zadeh, 1965). Each sample data point has a membership value between zero and one; a membership value close to one represents a high degree of similarity between the sample point and the cluster (Bezdek et al., 1984). Fuzzy clustering is an alternative to unsupervised classification using k-means. In fuzzy clustering, each pixel may belong to two or more clusters and will have a membership value for each cluster.
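As a small illustration (not from the thesis, values are hypothetical), a soft membership matrix can be represented directly: each pixel holds a membership grade in [0, 1] for every cluster, and the grades of one pixel sum to one.

```python
import numpy as np

# Hypothetical membership matrix U for 3 clusters and 4 pixels (c x n layout,
# as used later in the text): each column is one pixel.
U = np.array([
    [0.7, 0.1, 0.0, 0.3],   # cluster 1
    [0.2, 0.8, 0.1, 0.3],   # cluster 2
    [0.1, 0.1, 0.9, 0.4],   # cluster 3
])

# Soft (fuzzy) partition: per-pixel memberships sum to one.
assert np.allclose(U.sum(axis=0), 1.0)

# Hardening the soft output assigns each pixel to its maximum-membership cluster.
hard_labels = U.argmax(axis=0)
print(hard_labels)  # [0 1 2 2]
```

Hardening as in the last line discards the sub-pixel proportions, which is exactly the information loss the soft accuracy measures of Chapter 2 try to avoid.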
FCM is one of the most widely accepted iterative unsupervised fuzzy clustering algorithms, allowing a sample data point to belong to more than one cluster. The FCM algorithm partitions the dataset X = {x_1, x_2, ..., x_n} into c fuzzy subsets subject to a few constraints. A fuzzy c-partition of X can be represented by a (c × n) U

matrix where each entry μ_ij represents the class membership of a pixel (Tso and Mather, 2000). The matrix U satisfies the two constraints in equations (3.2a) and (3.2b) (Tso and Mather, 2000):

μ_ij ∈ [0, 1] (3.2a)

Σ_{j=1}^{c} μ_ij = 1 for all i (3.2b)

The clustering criterion used in FCM is attained by minimizing the least squares objective function in equation (3.3) (Tso and Mather, 2000):

J_FCM(U, V) = Σ_{i=1}^{N} Σ_{j=1}^{c} (μ_ij)^m ‖x_i − v_j‖²_A, 1 < m < ∞ (3.3)

where m is the membership weighting component which controls the degree of fuzziness, V = {v_1, v_2, ..., v_c} represents the vector of cluster centres (mean feature vectors from the training sites), x_i represents the spectral response of a pixel (feature vector), c is the number of cluster centres and N the number of pixels. ‖x_i − v_j‖²_A is the squared distance d_ij² between the measured value and the cluster centre, given in equation (3.4) (Kumar, 2007):

d_ij² = ‖x_i − v_j‖²_A = (x_i − v_j)^T A (x_i − v_j) (3.4)

where A is the weight matrix; in equations (3.5b) and (3.5c) it is computed per cluster j. Several norms are applicable in equation (3.4); three are widely used, namely the Euclidean norm, the diagonal norm and the Mahalanobis norm (Bezdek et al., 1984), formulated in equations (3.5a), (3.5b) and (3.5c):

A = I (Euclidean norm) (3.5a)

A = D_j^{-1} (Diagonal norm) (3.5b)

A = C_j^{-1} (Mahalanobis norm) (3.5c)

where I is the identity matrix and D_j is the diagonal matrix whose diagonal elements are the eigenvalues of the variance-covariance matrix C_j given in equation (3.6) (Bezdek et al., 1984):

C_j = Σ_{i=1}^{N} (x_i − v_j)(x_i − v_j)^T (3.6)

where

v_j = (1/N) Σ_{i=1}^{N} x_i (3.7)

If A = I, the objective function J_FCM identifies hyper-spherical clusters; for any other norm the clusters identified are hyper-ellipsoidal. One drawback of any fixed norm is a preference for clusters of a certain shape even when that shape is not present in the input dataset. Each class has a corresponding membership matrix, so the values of each matrix must be updated at every iteration. The class membership μ_ij is updated by equation (3.8) (Tso and Mather, 2000):

μ_ij = 1 / Σ_{k=1}^{c} (d_ij² / d_ik²)^{1/(m−1)} (3.8)

and the cluster centres are obtained by equation (3.9) (Tso and Mather, 2000):

v_j = Σ_{i=1}^{N} μ_ij^m x_i / Σ_{i=1}^{N} μ_ij^m (3.9)

Class membership values designate the proportions of the different classes within a particular pixel. The (unsupervised) FCM algorithm is summarized in steps 1 to 4 (Tso and Mather, 2000):
1. Initialize the membership matrix U^(0) = [μ_ij].
2. Compute the cluster centres using equation (3.9).
3. Update the membership matrix using equation (3.8).
4. Repeat steps 2 and 3 until ‖U_new − U_old‖ < ε,
where ε represents the termination tolerance.
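The FCM loop of steps 1 to 4 (equations (3.8) and (3.9)) can be sketched as follows. This is an illustrative NumPy implementation with the Euclidean norm A = I, not the thesis code; function and variable names are assumptions.

```python
import numpy as np

def fcm(X, c, m=2.0, eps=1e-4, max_iter=100, seed=0):
    """Unsupervised FCM with the Euclidean norm. X is (N, bands); returns (U, V)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((c, N))
    U /= U.sum(axis=0)                       # constraint (3.2b): columns sum to 1
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)         # eq. (3.9): centres
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(-1)  # squared distances d_ij^2
        d2 = np.maximum(d2, 1e-12)                           # guard against division by zero
        inv = d2 ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=0)                        # eq. (3.8): membership update
        if np.abs(U_new - U).max() < eps:                    # step 4: termination test
            U = U_new
            break
        U = U_new
    return U, V
```

With m = 2 on two well-separated point clouds, the returned memberships approach a crisp 0/1 partition; as m grows, the memberships become increasingly uniform, illustrating the role of the fuzzifier discussed next.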

Weighting component m: The value of m controls the degree of fuzziness and is also known as the fuzzifier. As m increases from one towards infinity, FCM changes from a crisp classifier to an entirely fuzzy classifier. Cannon et al. (1986) proposed that the value of m should range between 1.3 and 1.8; generally, the optimized value of m lies between 1.5 and 2.0. Zimmermann (2001) suggested taking m equal to 2, but there is no theoretical justification for this choice.

Number of cluster centres c: When the user does not know the number of information classes, prior knowledge of the number of cluster centres becomes necessary. Kim et al. (2009) proposed a cluster validity index method which determines the optimal number of clusters for fuzzy partitions.

3.3. KERNELS
Kernels are used in machine learning for data analysis, in particular in SVM classifiers. The kernel concept is based on an optimal linear separating hyperplane fitted between training samples in a higher dimensional feature space (Camps-Valls and Bruzzone, 2009). All samples that belong to the same class fall on the same side of the hyperplane. Boser et al. (1992) concluded that maximizing the margin between a class boundary and the training samples is a better method and optimizes cost functions such as the mean squared error. When classes are not linearly separable, the training samples are mapped to a higher dimensional space where they are considered to be linearly separable (Figure 3-2).

Figure 3-2: Mapping of kernels to a higher dimensional space (feature map φ from the input space to a higher dimensional space with a separating hyperplane)

To illustrate a kernel mapping, consider sample data from two non-empty sets X and T as in equation (3.10) (Camps-Valls and Bruzzone, 2009):

(x_1, t_1), (x_2, t_2), ..., (x_n, t_n) ∈ X × T (3.10)

where x_i represents input data from the set X and t_i ∈ T represents the target elements. The original samples in X are mapped into a higher dimensional feature space F as in equation (3.11) (Camps-Valls and Bruzzone, 2009):

φ: X → F, x ↦ φ(x) (3.11)

For any two samples x, x_i in the input space,

K(x, x_i) = ⟨φ(x), φ(x_i)⟩_F (3.12)

The function K is called a kernel and ⟨·,·⟩ is the inner product between φ(x) and φ(x_i). The mapping φ is referred to as the feature map and the dot product space F as the feature space (Camps-Valls and Bruzzone, 2009). The computational complexity in the original input space is reduced considerably by the use of a kernel function. Mercer's condition for kernels states that:

K(x, x_i) ≥ 0 (3.13)

Every function K(x, x_i) which satisfies Mercer's condition is called an eligible kernel (Kumar, 2007). Different types of kernels exist in machine learning. In this research work mainly three types of kernels are considered: local kernels, global kernels and the spectral angle kernel, discussed below.

1. Local kernels: Local kernels are based on the evaluation of a quadratic distance between any two training samples; only data that are close to or in the proximity of each other influence the kernel value (Kumar, 2007). All kernels based on a distance function are local kernels. A few examples are given in equations (3.14) to (3.18) (Kumar, 2007):

a) Gaussian kernel with the Euclidean norm (Mohamed and Farag, 2004):

K(x, x_i) = exp(−0.5 (x − x_i) A^{-1} (x − x_i)^T) (3.14)

where A is a weight matrix, here given by the Euclidean norm:

A = I (3.15a)

b) Radial basis kernel:

K(x, x_i) = exp(−‖x − x_i‖²) (3.16)

c) Kernel with moderate decreasing (KMOD):

K(x, x_i) = exp(1 / (1 + ‖x − x_i‖²)) − 1 (3.17)

d) Inverse multiquadratic kernel:

K(x, x_i) = 1 / √(‖x − x_i‖² + 1) (3.18)

2. Global kernels: samples that are far away from each other also influence the kernel value (Kumar, 2007). All kernels based on the dot product are global. The global kernels considered for this study are given in equations (3.19) to (3.21):

a) Linear kernel, one of the simplest kernels, based on the dot product:

K(x, x_i) = x · x_i (3.19)

b) Polynomial kernel, which computes the inner product of all monomials up to degree p:

K(x, x_i) = (x · x_i + 1)^p (3.20)

c) Sigmoid kernel:

K(x, x_i) = tanh(x · x_i + 1) (3.21)

3. Spectral kernel: To fit the hyperspectral point of view, other criteria that take the spectral signature into consideration are used. The spectral angle (SA) α(x, x_i) measures the spectral difference between x and x_i while being robust to differences in overall energy (e.g. illumination, shadows), as in equation (3.22) (Kumar, 2007; Mercier and Lennon, 2003):

α(x, x_i) = arccos((x · x_i) / (‖x‖ ‖x_i‖)) (3.22)

Composite kernels: A mixture of kernels can be used to mix dual characteristics, i.e. the characteristics of the dot product or the Euclidean distance with the spectral angle (Kumar, 2007; Mercier and Lennon, 2003). Mercer single kernels can be combined to include the spatial and spectral properties in a new family of kernels termed composite kernels. This family of kernels (Camps-Valls et al., 2006):
- can enhance the classification accuracy compared to traditional single kernels;
- can make the classification more flexible by considering both the spectral and spatial properties;
- can increase the computational efficiency.
There are different methods for combining two kernels, such as the stacked approach, the direct summation kernel, the weighted summation kernel and the cross-information kernel (Camps-Valls et al., 2006).
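The kernels of equations (3.14) to (3.22) translate almost directly into code. The sketch below is illustrative, not the thesis implementation: free parameters (the polynomial degree p, the inverse multiquadratic offset, the composite weight λ) are fixed to assumed values, and the weighted combination of two kernels anticipates the weighted summation method mentioned above.

```python
import numpy as np

def gaussian(x, xi):                  # eq. (3.14) with A = I (Euclidean norm)
    d = x - xi
    return np.exp(-0.5 * (d @ d))

def radial_basis(x, xi):              # eq. (3.16)
    d = x - xi
    return np.exp(-(d @ d))

def kmod(x, xi):                      # eq. (3.17), kernel with moderate decreasing
    d = x - xi
    return np.exp(1.0 / (1.0 + d @ d)) - 1.0

def inverse_multiquadratic(x, xi):    # eq. (3.18); offset of 1 is an assumption
    d = x - xi
    return 1.0 / np.sqrt(d @ d + 1.0)

def linear(x, xi):                    # eq. (3.19)
    return x @ xi

def polynomial(x, xi, p=2):           # eq. (3.20); degree p = 2 assumed
    return (x @ xi + 1.0) ** p

def sigmoid(x, xi):                   # eq. (3.21)
    return np.tanh(x @ xi + 1.0)

def spectral_angle(x, xi):            # eq. (3.22); clip guards against rounding
    cos = (x @ xi) / (np.linalg.norm(x) * np.linalg.norm(xi))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def composite(x, xi, ka, kb, lam=0.5):  # weighted summation of two kernels
    return lam * ka(x, xi) + (1.0 - lam) * kb(x, xi)
```

Note that the Gaussian and inverse multiquadratic kernels satisfy K(x, x) = 1, the property used later in the KFCM derivation, while local kernels such as KMOD peak at x = x_i with a different maximum value.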

In this research work the weighted summation method has been adopted for the composite kernel, which can be expressed as in equation (3.23) (Kumar, 2007):

K(x, x_i) = λ K_a(x, x_i) + (1 − λ) K_b(x, x_i) (3.23)

where K_a(x, x_i) and K_b(x, x_i) can be any two local, global or spectral kernels, and λ is a positive real-valued free parameter (0 < λ < 1) representing the weight given to each kernel. When using composite kernels, fine tuning of λ is necessary along with the degree of fuzziness. As K_a(x, x_i) and K_b(x, x_i) both satisfy Mercer's condition for eligible kernels, their linear combination is also an eligible kernel. In this study the best single kernels among the local as well as the global category are combined with the spectral kernel; combinations of two global kernels and of a local with a global kernel have also been considered.

KERNEL BASED FUZZY c-MEANS (KFCM) CLASSIFIER
The FCM classifier assigns sample data points to multiple clusters, thus overcoming the drawback of hard classifiers. The FCM classifier is effective in the presence of spherical, non-overlapping data clusters. For non-spherical, overlapping data clusters the Kernel based Fuzzy c-means (KFCM) classifier was introduced. The idea of KFCM is to map the input data to a high dimensional feature space and to perform FCM in that space. The literature has shown that KFCM performs better than FCM by reducing the computational complexity (Jain and Srivastava, 2013; Kaur et al., 2012). The FCM classifier minimizes the objective function in equation (3.3). Let φ be the implicit map into the feature space H of equation (3.12). KFCM is based on the minimization of the objective function in equation (3.24) (Yang et al., 2007):

J_KFCM(U, V) = Σ_{i=1}^{N} Σ_{j=1}^{c} (μ_ij)^m ‖φ(x_i) − φ(v_j)‖², 1 < m < ∞ (3.24)

where

‖φ(x_i) − φ(v_j)‖² = (φ(x_i) − φ(v_j))^T (φ(x_i) − φ(v_j))
= φ(x_i)^T
φ(x_i) − φ(x_i)^T φ(v_j) − φ(v_j)^T φ(x_i) + φ(v_j)^T φ(v_j)
= K(x_i, x_i) + K(v_j, v_j) − 2 K(x_i, v_j) (3.25)

If K(x, x) = 1, then equation (3.25) can be written as equation (3.26):

‖φ(x_i) − φ(v_j)‖² = 2 − 2 K(x_i, v_j) = 2 (1 − K(x_i, v_j)) (3.26)

Substituting equation (3.26) in equation (3.24), we get:

J_KFCM(U, V) = 2 Σ_{i=1}^{N} Σ_{j=1}^{c} (μ_ij)^m (1 − K(x_i, v_j)), 1 < m < ∞ (3.27)

The class membership matrix is updated by equation (3.28):

μ_ij = 1 / Σ_{k=1}^{c} ((1 − K(x_i, v_j)) / (1 − K(x_i, v_k)))^{1/(m−1)} (3.28)

The cluster centres are then obtained using equation (3.29):

v_j = Σ_{i=1}^{N} μ_ij^m K(x_i, v_j) x_i / Σ_{i=1}^{N} μ_ij^m K(x_i, v_j) (3.29)

Here the function K(x_i, v_j) can be replaced by any of the eight kernel functions discussed in Section 3.3. The KFCM classifier is carried out in the following steps 1 to 5 (Yang et al., 2007):
1. Choose the number of cluster centres and determine the termination criteria.
2. Choose a kernel function K(·,·) and determine its parameters.
3. Initialize the cluster centres v_j and calculate the membership matrix.
4. Update the cluster centres v_j using equation (3.29) and recalculate the membership matrix by equation (3.28).
5. If ‖U_new − U_old‖ < ε then stop, otherwise go to step 4.

ACCURACY ASSESSMENT
Accuracy assessment is important in order to assess the quality of classified outputs and to compare different classification algorithms (Okeke and Karnieli, 2006). One way to represent the accuracy of classification results is the error matrix, also termed the confusion matrix or contingency table. The error matrix gives the agreement between the classified and reference data along with the misclassifications. Based on the error matrix, several statistical measures have been introduced, such as the Kappa coefficient, user's accuracy and producer's accuracy, all used for summarizing accuracy information. The error matrix can only be used in the case of hard classification, i.e. when a pixel represents a single class and not when a pixel covers more than one class (Silvan-Cardenas and Wang, 2008). For soft classification, therefore, it cannot be applied.
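A single iteration of the KFCM steps above (equations (3.28) and (3.29)) can be sketched as follows. This is an illustrative NumPy sketch, not the thesis implementation; it assumes a Gaussian kernel so that K(x, x) = 1 holds, with the bandwidth sigma as a free parameter (sigma = 1 matches equation (3.14) with A = I), and it stores U in a transposed (N × c) layout.

```python
import numpy as np

def gaussian_kernel(X, V, sigma=1.0):
    """K(x_i, v_j) for all pairs: rows of X are pixels, rows of V are centres."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kfcm_step(X, V, m=2.0, sigma=1.0):
    """One KFCM iteration: membership update (3.28), then centre update (3.29)."""
    K = gaussian_kernel(X, V, sigma)            # shape (N, c)
    dist = np.maximum(1.0 - K, 1e-12)           # kernel-induced distance, eq. (3.26)
    inv = dist ** (-1.0 / (m - 1.0))
    U = inv / inv.sum(axis=1, keepdims=True)    # eq. (3.28): rows sum to 1 over classes
    W = (U ** m) * K                            # weights mu_ij^m * K(x_i, v_j)
    V_new = (W.T @ X) / W.sum(axis=0)[:, None]  # eq. (3.29): updated centres
    return U, V_new
```

Iterating `kfcm_step` until the membership change falls below ε reproduces steps 4 and 5 of the algorithm; replacing `gaussian_kernel` by any other kernel with K(x, x) = 1 leaves the loop unchanged.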
To assess the accuracy of a soft classification, other methods were introduced (Binaghi et al., 1999; Congalton, 1991; Pontius Jr and Cheuk, 2006). The Fuzzy Error Matrix (FERM) was the most appealing approach used. This section describes the methods introduced to assess the accuracy of soft classified results.

FUZZY ERROR MATRIX (FERM)

An error matrix is a square array of numbers set out in rows and columns, where the rows represent the sample elements of the classified data and the columns represent the sample elements of the corresponding reference data. In an error matrix, the diagonal elements show the number of pixels that are correctly classified and the off-diagonal elements show misclassification. In the case of FERM, both the classified and the reference data are considered as fuzzy sets, with membership grades in [0, 1], the interval of real numbers from 0 to 1. The fuzzy set operator min is used in building the error matrix, yielding a FERM that provides the maximum sub-pixel overlap between the classified and the reference image, as in equation (3.30) (Binaghi et al., 1999):

μ_{C_m ∩ R_n}(x) = min(μ_{C_m}(x), μ_{R_n}(x))   (3.30)

Here R_n represents the set of reference data assigned to class n, C_m represents the set of classified data assigned to class m and μ represents the membership grade of the class within a pixel. The overall accuracy is considered the simplest summary value for accuracy assessment. In the case of the conventional error matrix, the overall accuracy is calculated as the sum of the diagonal elements divided by the total number of sample elements, whereas in the case of FERM the overall accuracy is calculated by dividing the sum of the diagonal elements by the total membership grade found in the reference data, as given in equation (3.31) (Kumar, 2007).
OA_FERM = Σ_{i=1}^{c} M(i, i) / Σ_{j=1}^{c} R_j   (3.31)

where OA_FERM represents the overall accuracy, M(i, j) represents the membership in class i of the soft classified output and class j of the soft reference data, c represents the number of classes and R_j represents the sum of the membership grades of class j in the soft reference data.

SUB-PIXEL CONFUSION UNCERTAINTY MATRIX

It is difficult to determine the actual overlap among classes based on land-cover fractions. This is usually termed the sub-pixel area allocation problem (Silvan-Cardenas and Wang, 2008). The minimum and maximum overlap between any two classes depends upon the spatial distribution of these classes within a pixel. This problem has a unique solution when at most one class is either overestimated or underestimated at each pixel, in which case the sub-pixel confusion can be determined uniquely. Otherwise there is no unique solution, and the solution can be represented by confusion intervals. If no unique solution exists, a Sub-pixel Confusion Matrix (SCM) contains confusion intervals in the form of a

center value plus or minus the maximum error. The confusion matrix of a soft classification satisfies (a) the diagonalization property, where the matrix is diagonal if the assessed data matches the classified data, and (b) the marginal sums property, where the marginal sums match the total grades from the classified as well as assessed data (Silvan-Cardenas and Wang, 2008). For assessing the pixel-class relationship in sub-pixel classifications, various operators were defined. The MIN operator gives the maximum possible overlap between the classified and the assessed data; it may, however, overestimate the actual sub-pixel agreement and disagreement, resulting in greater marginal sums. The Similarity Index (SI) is a variant of the MIN operator and gives a normalized sub-pixel overlap. The PROD operator gives the expected overlap between the assessed and reference sub-pixel partitions. The LEAST operator gives the minimum possible sub-pixel overlap between two classes (Silvan-Cardenas and Wang, 2008). The basic operators, however, cannot satisfy the diagonalization property, and hence the composite operators MIN-PROD, MIN-MIN and MIN-LEAST were put forth. The MIN-MIN operator assigns the diagonal elements first, followed by the off-diagonal elements. The MIN-LEAST operator uses the MIN operator for the diagonal elements and the LEAST operator for the off-diagonal elements. The MIN-PROD operator uses the MIN operator for the diagonal elements and the normalized PROD operator for the off-diagonal elements. The MIN-MIN and MIN-LEAST operators were introduced to provide the minimum and maximum sub-pixel overlap; when at most one class is either underestimated or overestimated, the MIN-PROD composite operator is used (Silvan-Cardenas and Wang, 2008).

ROOT MEAN SQUARE ERROR (RMSE)

The Root Mean Square Error (RMSE) is the square root of the mean squared difference between the membership values of the classified and reference images.
It is calculated as in equation (3.32) (Dehghan and Ghassemian, 2006),

RMSE = √[(1/N) Σ_{i=1}^{N} Σ_{j=1}^{c} (μ_ij − μ̂_ij)²]   (3.32)

where μ_ij represents the membership value of pixel i for class j in the classified image, μ̂_ij represents the corresponding membership value in the reference image, c is the total number of classes and N represents the number of pixels in the image. A lower RMSE value represents a low uncertainty and vice versa. The RMSE can be calculated in two ways: 1) for the complete image, the global RMSE, and 2) for the per-class fractional images, the per-class RMSE (Chawla, 2010). The global RMSE is calculated by equation (3.32), and the per-class RMSE is calculated by equation (3.33), which restricts the sum to a single class j,

RMSE_j = √[(1/N) Σ_{i=1}^{N} (μ_ij − μ̂_ij)²]   (3.33)
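The global and per-class RMSE of equations (3.32)–(3.33) can be sketched as follows; this is illustrative, assuming the memberships are held as (pixels × classes) arrays, and is not the thesis code.

```python
import numpy as np

def global_rmse(u_cls, u_ref):
    # Equation (3.32): square root of the mean (over N pixels) of the summed
    # squared membership differences over all classes.
    n_pixels = u_cls.shape[0]
    return float(np.sqrt(np.sum((u_cls - u_ref) ** 2) / n_pixels))

def per_class_rmse(u_cls, u_ref):
    # Equation (3.33): the same quantity restricted to one fraction image at
    # a time; returns one RMSE value per class.
    n_pixels = u_cls.shape[0]
    return np.sqrt(np.sum((u_cls - u_ref) ** 2, axis=0) / n_pixels)
```

The per-class variant simply keeps the class axis, so each value can be compared against its own fraction image.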

ENTROPY MEASURE

Dehghan and Ghassemian (2006) introduced the entropy measure to assess the quality of classification. The reason is that the Root Mean Square Error (RMSE), the Mean Relative Error (MRE) and the Correlation Coefficient (CC) measures for accuracy assessment depend upon both the actual and desired outputs of the classifier and hence depend on the error, whereas the entropy measure depends only upon the actual outputs of the classifier and is thus less sensitive to error variations. This measure determines the accuracy of the classification result based on a single number per pixel. The entropy measure is expressed as in equation (3.34) (Dehghan and Ghassemian, 2006),

E = − Σ_{i=1}^{N} Σ_{j=1}^{c} μ_ij log₂(μ_ij)   (3.34)

where N represents the number of pixels in the image, c represents the number of classes and μ_ij represents the membership value assigned to the i-th pixel for class j. Fuzzy classifiers generate soft classified outputs in the form of fractional images, which are pictorial representations of the membership values calculated for a particular class (Harikumar, 2014). For five classes, five fractional images are generated. In the fractional image of a particular class, the membership values for that class will be high and the membership values for all the other classes will be low. For calculating the entropy of a particular class:
1. The mean of a few training samples is calculated for the class under consideration where it appears to be homogeneous.
2. Using equation (3.34), entropy values are calculated from the membership values of that sample in all fractional images.
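Equation (3.34) can be sketched as follows (illustrative; the convention 0·log₂ 0 = 0 is assumed):

```python
import numpy as np

def entropy(memberships):
    # Equation (3.34): E = -sum_i sum_j mu_ij * log2(mu_ij), with 0*log2(0) := 0.
    mu = np.asarray(memberships, dtype=float)
    safe = np.where(mu > 0, mu, 1.0)   # log2(1) = 0, so zero memberships add 0
    return float(-np.sum(mu * np.log2(safe)))
```

For the membership values 0.8, 0.3 and 0.2 used in the worked example below, `entropy([0.8, 0.3, 0.2])` returns about 1.24.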
If there are, for example, three classes and the membership values from the testing sites of the fractional images are equal to 0.8, 0.3 and 0.2 for the three classes, then using equation (3.34) the entropy value can be calculated as:

E = −[(0.8 log₂ 0.8) + (0.3 log₂ 0.3) + (0.2 log₂ 0.2)] ≈ 1.24

A high entropy value represents higher uncertainty and vice versa. In this work, the entropy value is used to optimize the values of the parameters m and λ. Here, for both FCM and KFCM, fractional images were generated for each class for all values of m varying from 1.1 to 2.0. As m moves to a greater value, the generated fractional images were no longer meaningful. A low entropy value indicates a good quality of the classified image. In a fractional image, low entropy is obtained when the difference between the membership values

of the class under consideration (favourable class) is high and the membership values for all other classes (unfavourable classes) are very low. In the case of composite kernels, the uncertainty has been calculated to optimize λ along with m.

MEAN MEMBERSHIP DIFFERENCE METHOD

The entropy measure alone cannot be used for the optimization of the various parameters used in this research work, as this may result in misclassification in the generated outputs. Thus, the mean membership difference calculation method is adopted. It also helps to match the fuzziness in the image to the fuzziness on the ground. In this method, the optimal m was found by calculating the difference between the membership values of the class of interest and the average of the membership values of the other classes. The calculated value should be maximum, tending to 1.0. The method can be explained with an example. Consider that the analyst identified six classes (classes 1 to 6) for a dataset. For six classes, six fractional images are generated. The membership values in a fractional image will be high where the class is present and low in other regions (Harikumar, 2014). Suppose the class of interest is class 1, as shown in Figure 3-3.

Figure 3-3: An image with six classes identified along with the generated fractional images

This method can be summarized in the following steps:
1. Consider the fraction image generated for the class of interest (class 1, water).
2. Consider seven to eight pixels from homogeneous areas of the class under consideration and of all other classes.
3. Calculate the mean of the pixels for all the classes (class 1 to class 6) from the testing site for each class.
4. Calculate the membership value difference between the class under consideration and the membership values of all the other classes in the same fraction image, e.g. Δ12 = class 1 − class 2, Δ13 = class 1 − class 3, Δ14 = class 1 − class 4, Δ15 = class 1 − class 5, Δ16 = class 1 − class 6.
5.
Calculate the mean of all the differences calculated in step 4: (Δ12 + Δ13 + Δ14 + Δ15 + Δ16)/5.
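The steps above can be sketched as follows (illustrative; the input is assumed to be the per-class means from step 3, read from the fraction image of the class of interest):

```python
import numpy as np

def mean_membership_difference(class_means, class_of_interest=0):
    # Steps 4-5: subtract every other class's mean membership from that of the
    # class of interest and average the differences; ideally this tends to 1.0.
    mu = np.asarray(class_means, dtype=float)
    others = np.delete(mu, class_of_interest)
    return float((mu[class_of_interest] - others).mean())
```

A homogeneous, well-separated class (such as water here) gives a value close to 1.0; merged or fuzzy classes pull the value down.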

From the above steps, if we consider the fraction image of class 1, the membership values of the pixels for that class will be high, i.e. ideally equal to or close to 1.0 for homogeneous areas, while the membership values for all the other classes will ideally be equal to zero or practically approaching zero. Thus, if we calculate the mean of the membership value differences between these two, the value should tend to 1.0, i.e. the calculated mean membership difference should be highest. This procedure has to be done for each fraction image generated for the given parameters. The class of interest can be selected based on homogeneity: when a class is more homogeneous, the probability that the membership grade tends to 1 is high, and when the class is less homogeneous, this probability is very low. Thus, for the optimization of the weight component m, the selection of a homogeneous class was necessary. For this research work, the water class is considered to check the mean membership difference, as this class has been identified as more homogeneous when compared to the other classes. Values of m and λ are optimized considering minimum entropy and maximum mean membership difference.

STUDY AREA AND MATERIALS USED

This section identifies the study area, gives an explanation for selecting this particular study area and describes the materials used. The specifications of each sensor and the pre-processing stages of the datasets have been included. Steps for generating soft LISS-IV reference data for the validation of AWiFS and LISS-III images have also been included.

STUDY AREA

Selection of a study area in any research is important for evaluating the efficiency and performance of the adopted methodology. The study area considered for this particular research work was Sitarganj Tehsil, Udham Singh Nagar district, Uttarakhand state, India (Singha, 2013). The considered area extends from N to N latitudes and from E to E longitudes (Singha, 2013).
Sitarganj Tehsil was selected as it contained six land cover classes: agricultural fields with crop, agricultural fields without crop (both dry and moist), Sal and Eucalyptus forests, and two water reservoirs, the Baigul (Sukhi) and Dhora reservoirs. The reasons for selecting this study area include:
- The presence of mixed pixels, which occur because of the gradation of land cover classes from one to another (e.g. water to grassland), helps to assess the capability of the Kernel based Fuzzy c-means (KFCM) classifier.
- Data from the AWiFS, LISS-III and LISS-IV sensors of Resourcesat-1 and Resourcesat-2 were available for the same date to perform image-to-image accuracy assessment.
- A field visit to the study area was conducted in November.

- Final results of KFCM can be compared with the final results of Fuzzy c-means (FCM) (Singha, 2013).

MATERIALS USED

The appropriate use of Remote Sensing (RS) data, which differs in spectral, spatial and temporal properties, depends on the suitable algorithms used in any research work. In this study, AWiFS (Advanced Wide Field Sensor), LISS-III (Linear Imaging Self-Scanning System-III) and LISS-IV (Linear Imaging Self-Scanning System-IV) images of both Resourcesat-1 (IRS-P6, Indian Remote Sensing satellite) and Resourcesat-2 were used. Resourcesat-1 (IRS-P6) was launched in 2003 with the objective of natural resource management, with a 5-24 day repeat cycle. The images from AWiFS, LISS-III and LISS-IV were acquired at the same time. The dataset available from Resourcesat-1 was captured on 15 October 2007 and that from Resourcesat-2 on 23 November 2011 (Chawla, 2010). The soft classified outputs from the finer resolution LISS-IV image were used for the validation of the soft outputs of LISS-III and AWiFS. The specifications of the satellite data used are shown in Table 3-1.

Table 3-1: Resourcesat-1 and Resourcesat-2 sensor specifications (spatial resolution in m, radiometric resolution, swath in km and spectral resolution in µm for the AWiFS, LISS-III and LISS-IV sensors)

Figure 3-4: Geographical Location of Study Area

Figure 3-5: LISS-IV (Resourcesat-2) image of Sitarganj Tehsil with classes (a) Agricultural field with crop (b) Sal Forest (c) Eucalyptus Plantations (d) Dry Agricultural field (e) Water

DATASET PREPROCESSING

Geo-rectification is necessary when accurate area, distance and direction measurements are required to be made from imagery, as well as for overlaying images to obtain a pixel-to-pixel correspondence. Here, the LISS-IV image is used as a reference image for rectification of the LISS-III and AWiFS datasets. The first step was the image-to-map rectification of the LISS-IV image with the digital Survey of India (SOI) toposheet, numbered 53 P. The LISS-IV image was geo-registered in the UTM projection, Zone 44 North, with the Everest spheroid and datum. Geo-registration of the LISS-III and AWiFS images was done with the geometrically corrected LISS-IV image. As outputs from the finer resolution LISS-IV image were used as reference data for the evaluation of the coarser resolution AWiFS and LISS-III images, resampling was necessary for accuracy assessment. For this purpose, all three images were resampled in such a way that the pixel sizes of LISS-IV, LISS-III and AWiFS were in the ratio 1:4:12, respectively. This pixel size ratio was maintained to have full pixel correspondence for applying the FERM accuracy assessment approach. Thus, finer resolution pixels were aggregated to form a coarser resolution pixel, i.e. 4 × 4 = 16 LISS-IV pixels were combined to form one coarser resolution LISS-III pixel. The aggregated LISS-IV pixels are used for the accuracy assessment of AWiFS and LISS-III. Being easy and fast, the Nearest Neighbour resampling technique is used here; it also retains the original data file values (Chawla, 2010).

REFERENCE DATASET GENERATION

For this research, the classified outputs of the finer resolution LISS-IV image are used as reference dataset. The reference data used for AWiFS were the LISS-III and LISS-IV images. Similarly, the reference dataset used for the LISS-III image was LISS-IV. For the following reasons, the soft ground data were not acquired (Chawla, 2010):
- It is not possible to locate sub-pixel classes on the ground.
- Some areas were inaccessible and thus obtaining ground data in soft mode was difficult.

In this research, classified output images were generated in the form of fraction images for each class under consideration. Hence, the fraction images of the finer resolution LISS-IV image were used as the reference for the accuracy assessment of the AWiFS and LISS-III fractional images. To avoid errors between the test and reference datasets, both images were used with the same date of acquisition. Kloditz et al. (1998) proposed a multi-resolution method to estimate the classification accuracy of a low resolution image by using a high resolution image, where every high resolution pixel within a defined area contributes to the corresponding low resolution pixel. It has been shown that there is no loss of information in the lower resolution images; rather, the pattern is preserved. The multi-resolution technique was used because the resolutions of the classified and reference datasets were not the same and could not be used for direct accuracy assessment.
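The aggregation of fine-resolution pixels into coarse reference pixels described above can be sketched as follows (illustrative; a 4 × 4 block average of a single fraction image is assumed):

```python
import numpy as np

def aggregate_reference(fine, block=4):
    # Average each block x block window of fine-resolution membership values
    # (e.g. 4 x 4 = 16 LISS-IV pixels) into one coarse reference pixel.
    rows, cols = fine.shape
    if rows % block or cols % block:
        raise ValueError("image dimensions must be multiples of the block size")
    return fine.reshape(rows // block, block, cols // block, block).mean(axis=(1, 3))
```

Each coarse pixel then holds the class fraction implied by its contributing fine pixels, which is what the FERM comparison needs.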

3.3. METHODOLOGY

The main objective of this research work was to develop a Kernel based Fuzzy c-means classifier. This chapter gives a detailed explanation of the steps adopted to achieve the objectives mentioned in section 1.2. The flowchart of the adopted methodology is shown in Figure 3-6: the AWiFS, LISS-III and LISS-IV images are pre-processed (geo-registration) and then classified with the supervised soft classification approaches, i.e. Fuzzy c-means (FCM), Kernel based Fuzzy c-means (KFCM) and FCM with a combination of the best kernels, followed by image-to-image accuracy assessment. The proposed kernels comprise the local kernels (Gaussian kernel using Euclidean norm, Radial Basis kernel, KMOD kernel, Inverse Multiquadratic kernel), the global kernels (Linear kernel, Polynomial kernel, Sigmoid kernel) and the Spectral Angle kernel.

Figure 3-6: Methodology Adopted

GEO-REFERENCING

Initially, the AWiFS, LISS-III and LISS-IV images of Resourcesat-1 and Resourcesat-2 were geometrically rectified and geo-registered. Using the Survey of India (SOI) toposheet, the finer resolution LISS-IV images

were geo-registered, followed by the geo-registration of AWiFS and LISS-III. The process of geometric correction and geo-registration of the datasets is explained in detail in the dataset preprocessing section.

PREPARATION OF REFERENCE DATASET

From the KFCM classifier, soft classified outputs were generated, so for accuracy assessment the generation of a soft reference dataset was necessary. The soft outputs were in the form of fraction images generated for each class under consideration. The results of the LISS-IV image were used as reference dataset for both AWiFS and LISS-III. A detailed explanation of the reference dataset generation has been given in the section on reference dataset generation.

SUB-PIXEL CLASSIFICATION ALGORITHMS

A supervised KFCM classifier was adopted to generate the sub-pixel classification outputs. The three approaches considered for this study, i.e. Fuzzy c-means (FCM), FCM with single kernels (KFCM) and FCM with composite kernels, are explained in detail in the following sections.

FUZZY C-MEANS (FCM)

Different algorithms are known for fuzzy based clustering. The outputs of these sub-pixel classification algorithms were obtained in the form of fraction images for each class under consideration. The weight component m, which controls the degree of fuzziness, was optimized based on the maximum mean membership difference between favourable and unfavourable classes and minimum entropy. Out of the three norms introduced by Bezdek et al. (1984), only the Euclidean norm is considered, as the Diagonal and Mahalanobis norms are sensitive to noise and thus reduce the classification accuracy (Kumar, 2007). This approach was adopted for a comparative analysis between simple FCM results and the KFCM approach.

KERNEL BASED FUZZY c-MEANS (KFCM)

Mainly three categories of kernels were considered: local kernels, global kernels and the spectral angle kernel. In this study, four local kernels were used: the Gaussian kernel using the Euclidean norm, the Radial Basis kernel, the Kernel with Moderate Decreasing (KMOD) and the Inverse Multiquadratic kernel.
Three global kernels were used: the Linear kernel, the Polynomial kernel and the Sigmoid kernel. Overall, eight single kernels were studied using the FCM approach. Following the implementation of the eight single kernels, the next step was to optimize the weight component m using the mean membership difference between favourable and unfavourable classes and the entropy method. The best single kernel in each of the global and local categories was selected based on the maximum mean membership difference between favourable and unfavourable classes and minimum entropy.
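As an illustration, the eight single kernels named above can be written as follows. The exact functional forms and parameter values used in the thesis are not reproduced here; these are common textbook forms, and the parameter defaults, the radial basis variant and the cosine form of the spectral angle kernel are assumptions.

```python
import numpy as np

# Local kernels (K(x, x) = 1 for the Gaussian and radial basis forms)
def gaussian(x, v, sigma=1.0):
    return float(np.exp(-np.sum((x - v) ** 2) / (2.0 * sigma ** 2)))

def radial_basis(x, v, gamma=1.0):          # assumed L1-based variant
    return float(np.exp(-gamma * np.sum(np.abs(x - v))))

def kmod(x, v, gamma=1.0, sigma2=1.0):      # Kernel with MODerate decreasing
    return float(np.exp(gamma / (np.sum((x - v) ** 2) + sigma2)) - 1.0)

def inverse_multiquadratic(x, v, c=1.0):
    return float(1.0 / np.sqrt(np.sum((x - v) ** 2) + c ** 2))

# Global kernels
def linear(x, v):
    return float(np.dot(x, v))

def polynomial(x, v, d=2, c=1.0):
    return float((np.dot(x, v) + c) ** d)

def sigmoid(x, v, alpha=1.0, c=0.0):
    return float(np.tanh(alpha * np.dot(x, v) + c))

# Spectral angle kernel, taken here as the cosine of the angle between vectors
def spectral_angle(x, v):
    cos = np.dot(x, v) / (np.linalg.norm(x) * np.linalg.norm(v))
    return float(np.clip(cos, -1.0, 1.0))
```

A composite kernel then weights two of these, e.g. λ·K_a(x, v) + (1 − λ)·K_b(x, v).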

FCM WITH COMPOSITE KERNELS

The composite kernels were obtained from the best single kernels. In composite kernels, a weight factor λ, varying from 0.1 to 0.9, is given to each kernel. For a composite kernel, optimization of both m and λ was necessary; this was done considering the maximum mean membership difference between favourable and unfavourable classes and minimum entropy, from which the best composite kernel was concluded. Untrained case outputs were also obtained by not training the KFCM classifier with the signature data of a class; in this study, the agricultural field with crop class was considered as the untrained class. As the approach used was fuzzy, the classified outputs were generated in the form of fractional images. Fractional images are the pictorial representation of the membership values generated for a particular class (Harikumar, 2014); the number of fractional images generated equals the number of classes considered. After the generation of the fraction images, the entropy measure and the mean membership difference values are analysed to select the best kernels. The selection of training samples was important for all three classification approaches. The mean of the membership values of the collected samples was calculated for each class and used to find the difference between a favourable and an unfavourable class.

ACCURACY ASSESSMENT

Accuracy assessment is important for assessing the quality of the classified outputs. Image-to-image accuracy assessment was done with LISS-IV as the reference dataset for both AWiFS and LISS-III. Here, the Fuzzy Error Matrix (FERM) was used to generate the overall accuracy. The overall classification accuracy of the KFCM classifier was compared with that of the FCM classifier. Accuracy in the untrained case has also been evaluated.

4. RESULTS

4.1. PARAMETER ESTIMATION

To assure the best classified outputs from the algorithms used in this research work, it was required to estimate optimal values for the weight constant m and the weight given to each kernel, λ, for FCM, FCM using single kernels and FCM using composite kernels. Outputs from these classifiers were obtained as fractional images because the classification approach was fuzzy. It was also necessary to optimize the parameters m and λ to match the fuzziness in the image to the fuzziness on the ground. Optimization of both parameters was based upon the entropy measure and the mean membership difference (uncertainty) calculation discussed earlier. m was optimized in the case of both single and composite kernels; the next two subsections explain the two cases.

Why are both the uncertainty and the entropy calculation methods used to optimize m? Figure 4-1 shows the entropy values calculated for the Gaussian kernel. It is observed that as m varies from 4.0 to 10.0 the entropy values reach a saturation point, while an increase in entropy can be seen in the range between 1.0 and 4.0. It was, however, difficult to find the optimal m just by considering low entropy values, so the mean membership difference method was also considered. In Figure 4-2 we can see that the mean membership difference reaches its maximum of 1.0 for m values in the range between 1.0 and 4.0. According to the criteria for parameter optimization (minimum entropy and maximum mean membership difference), it can be concluded that the optimized m lies between 1.0 and 4.0. As the value of m increases, a decrease in the mean membership difference is seen; this occurs because fuzziness increases as m increases. Of all the classes identified in Resourcesat-1 and Resourcesat-2, the water class is the most homogeneous. Also, from Figure 4-1 it is observed that the water class has the least entropy measure as compared to the other classes.
Similarly, from Figure 4-2 it is observed that the water class reaches the maximum membership value of 1.0 and remains constant for lower values of m. Thus, the water class fraction image generated by the classifier has been considered for the optimization of the various parameters used in this research work. The parameter m was optimized for all the kernels (Table 4-3; Table 4-4). The entropy and mean membership differences generated for FCM and all the single kernels for Resourcesat-1 AWiFS are given in Figure A-5 (Appendix A).
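The selection criterion described above (minimum entropy together with maximum mean membership difference) can be sketched as a simple grid search over candidate m values; the function names and the lexicographic tie-breaking are illustrative assumptions:

```python
def optimise_m(candidates, entropy_of, delta_of):
    # Pick the m with the largest mean membership difference, breaking ties by
    # the smallest entropy; entropy_of and delta_of are callables that evaluate
    # the two criteria for a given m (e.g. from generated fraction images).
    best_m, best_key = None, None
    for m in candidates:
        key = (delta_of(m), -entropy_of(m))   # maximise delta, minimise entropy
        if best_key is None or key > best_key:
            best_m, best_key = m, key
    return best_m
```

The same search, run over (m, λ) pairs, applies to the composite-kernel case.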

Table 4-1: Classes identified at Sitarganj Tehsil in the AWiFS, LISS-III and LISS-IV sensors for Resourcesat-1 and Resourcesat-2.

Resourcesat-1: (a) Agricultural field with crop (b) Sal Forest (c) Eucalyptus plantations (d) Dry Agricultural field without crop (e) Moist Agricultural field without crop (f) Water
Resourcesat-2: (a) Crop (b) Eucalyptus Plantation (c) Fallow Land (d) Sal Forest (e) Water

Figure 4-1: Variation in entropy with respect to weight constant m for the Gaussian kernel using Euclidean norm (Resourcesat-1 AWiFS)

Figure 4-2: Variation in mean membership difference with respect to weight constant m for the Gaussian kernel using Euclidean norm (Resourcesat-1 AWiFS)

Optimization of the weight factor (λ) for composite kernels

As discussed earlier, a composite kernel requires a weight factor which gives weight λ to kernel K_a and 1 − λ to kernel K_b. In the case of composite kernels, it was necessary to optimize both λ and m. For this, λ values were considered within the range between 0.10 and 0.90. When the weight given to kernel K_b is higher, misclassified outputs are generated, as shown in Figure 4-4. Misclassification occurs because, among the single kernels, if K_a performs better than K_b, then a high weight to K_b in the composite case may result in a kernel with lower performance. Figure 4-3 shows the entropy and mean membership difference graphs generated for the Gaussian-Spectral kernel. From Figure 4-3 it can be seen that Δ reaches its maximum for λ = 0.8 and m = 1.04, but when the fractional images are interpreted we get misclassification: Figure 4-4 shows misclassification for all the classes except water. Thus, the optimization of the parameters m and λ for composite kernels was also based on interpreting the generated fractional images. As the value of m decreases, a steep increase in the mean membership difference can be seen; with an increase in m and an increase in λ, the entropy value decreases. It can be understood from Figure 4-4 that a lower weight given to the Gaussian kernel gives misclassified outputs. It is observed that agricultural field with crop is misclassified as moist agriculture, and sal forest as agricultural field with crop (Figure 4-4). The fraction image generated for eucalyptus plantations does not show high membership values. If a higher weight was given to the Gaussian kernel, then lower entropy values and

maximum mean membership differences were observed. The optimized m and λ values for the composite kernels are given in section 4.4.

Figure 4-3: Estimation of the weight given to each kernel (λ) using (a) entropy and (b) mean membership difference plots for the Gaussian-Spectral kernel from AWiFS (Resourcesat-1)

Figure 4-4: Misclassified outputs for the Gaussian-Spectral kernel, Resourcesat-1 AWiFS, for m = 1.04 and λ = 0.80, for (a) Agricultural field with crop (b) Sal Forest (c) Eucalyptus plantations (d) Dry agricultural field without crop (e) Moist agricultural field without crop (f) Water. Misclassification is seen in agricultural field with crop as moist agriculture, in sal forest as agricultural field with crop, and in moist agriculture as sal plantations.

4.2. RESULTS OF SUPERVISED FCM CLASSIFIER

To compare the performance of the FCM classifier with the KFCM classifier, it was necessary to generate the fractional images for the FCM classifier. Using the entropy and mean membership difference methods, m was optimized for the FCM classifier. Optimized m values for all six images are shown in Table 4-2. It was observed that the maximum mean membership differences obtained for Resourcesat-1 and Resourcesat-2 were 1.0 and 0.80 respectively; the value for Resourcesat-2 is slightly less than for Resourcesat-1 due to its higher radiometric resolution. The fraction images generated for FCM on Resourcesat-1 and Resourcesat-2 are shown in Figure 4-5 and Figure 4-6.

Table 4-2: Estimated optimized m values for the FCM classifier, along with the calculated mean membership difference (Δ) and entropy, for the AWiFS, LISS-III and LISS-IV images of Resourcesat-1 and Resourcesat-2

While interpreting the fractional images, it can be seen that Resourcesat-2 classifies the land cover classes much better than Resourcesat-1, owing to the lower radiometric resolution of the latter. The optimized m value from Resourcesat-2 was 1.01 for the LISS-IV imagery; as m tends to 1.01, fuzziness decreases. In FCM, higher values of entropy, indicating higher uncertainty, are obtained as m increases from 3.0 (Figure A-5 (i), Appendix A). Similarly, the mean membership difference approaches 1.0 for smaller values of m (Figure A-5 (i), Appendix A). While considering the eucalyptus plantation, it can be seen that among the datasets used the Resourcesat-2 LISS-III image has the least entropy, i.e. the least uncertainty. Even then, merging of classes was found in the fraction images of the classes agricultural field with crop, sal forest and eucalyptus plantation; the fraction image for the eucalyptus plantation highlights all three vegetation classes.
The mean membership difference calculated for all the classes had a maximum value of 0.80.

Figure 4-5: Fractional images generated for optimized m values for the FCM classifier for (1) LISS-IV, (2) LISS-III and (3) AWiFS (Resourcesat-1) images with identified classes (a) Agricultural field with crop (b) Sal Forest (c) Eucalyptus Plantation (d) Dry agricultural field without crop (e) Moist agricultural field without crop and (f) Water

Figure 4-6: Fractional images generated for optimized m values for the FCM classifier for (1) LISS-IV, (2) LISS-III and (3) AWiFS (Resourcesat-2) images with identified classes (a) Agricultural field with crop (b) Eucalyptus Plantation (c) Fallow Land (d) Sal Forest (e) Water

4.3. RESULTS OF FCM CLASSIFIER USING SINGLE KERNELS

Using the entropy and mean membership difference methods, the value of m was optimized for the KFCM classifier. Optimized m values along with their estimated entropy and mean membership difference measures are given in Table 4-3 and Table 4-4 for the Resourcesat-1 and Resourcesat-2 datasets. The local kernels have a lower entropy than the global kernels and the spectral angle kernel, and reach the maximum mean membership difference (Δ = 1.0). It was observed that for Resourcesat-1 the Inverse Multiquadratic (IM) kernel has the lowest entropy and the maximum mean membership difference (Δ) of 1.0. As three classes in the dataset fall under the category vegetation, their mean feature vectors are almost the same; this may be the reason why the IM and SA kernels misclassify agricultural field as eucalyptus, sal, or a combination of the three. Thus, IM was concluded to be the best single kernel for Resourcesat-1. For Resourcesat-2, the Gaussian kernel with the Euclidean norm gave the best results; the fraction images generated for the Gaussian kernel with Resourcesat-2 show no misclassification. The maximum mean

membership difference, however, was equal to 0.80, which shows less fuzziness as compared to IM (Resourcesat-1). The radiometric resolution of Resourcesat-2 is higher than that of Resourcesat-1 and as a result the maximum mean membership differences of Resourcesat-2 for the three images were equal to 0.80. Overall, the Gaussian kernel has the lowest entropy; the corresponding fraction images generated are given in Figure 4-7. While considering the fraction images of the linear, polynomial and sigmoid kernels, the three vegetation classes agricultural field with crop, sal forest and eucalyptus plantation do not highlight their corresponding feature classes; instead all three classes are merged with high membership values. For the class water these global kernels also merge the patches of moist agricultural field without crop. The entropy measure for these kernels was also higher. The entropy measures and fractional images generated by the local kernels and the spectral angle kernel convey a much better classified output as compared to the global kernels. Even then, the fractional images generated for the classes agricultural field with crop, sal and eucalyptus are merged in a few cases because of the similarity in their spectral values.

Entropy measure of the classified outputs: Uncertainty of the classified results can be assessed using the entropy values. In this study, classified outputs were generated in the form of fractional images. For Resourcesat-1, the lowest entropy was obtained for the Inverse Multiquadratic kernel, which comes under the category of local kernels. The highest entropy values were observed for the global kernels, i.e., uncertainty is higher in their case. This is clear from the fraction images generated by the global kernels. Fraction images generated by all three global kernels do not highlight their feature classes which come under the vegetation category.
Agricultural field with crop, sal forest and eucalyptus plantations are all merged in their fractional images. These kernels are not able to differentiate between the spectral values of these classes. Neither the local kernels nor the spectral angle kernel shows this misclassification, which indicates the poor performance of the global kernels. For Resourcesat-2, the Gaussian kernel with the Euclidean norm has the overall lowest entropy value.
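The eight single kernels compared in Tables 4-3 and 4-4 can be written in their standard forms roughly as below. The exact parameter settings used in this work are not reproduced in the text, so the constants here are illustrative assumptions, and the city-block variant of the radial basis kernel is one plausible reading of the distinction drawn from the Gaussian kernel with the Euclidean norm.

```python
import numpy as np

# Global kernels depend on the dot product of the two samples.
def linear(x, y):                 return x @ y
def polynomial(x, y, d=2, c=1.0): return (x @ y + c) ** d
def sigmoid(x, y, a=0.5, b=0.0):  return np.tanh(a * (x @ y) + b)

# Local kernels depend on the distance between the two samples.
def gaussian(x, y, s=1.0):        # Euclidean norm
    return np.exp(-np.sum((x - y) ** 2) / (2 * s**2))
def radial_basis(x, y, s=1.0):    # city-block norm, an assumed variant
    return np.exp(-np.sum(np.abs(x - y)) / (2 * s**2))
def kmod(x, y, g=1.0, s=1.0):     # KMOD (Ayat et al., 2001)
    return np.exp(g / (np.sum((x - y) ** 2) + s**2)) - 1.0
def inverse_multiquadratic(x, y, c=1.0):
    return 1.0 / np.sqrt(np.sum((x - y) ** 2) + c**2)

# Spectral angle kernel: cosine of the angle between the spectral vectors.
def spectral_angle(x, y):
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
```

Note that the local kernels equal their maximum when x = y regardless of scale, while the global kernels grow with the magnitude of the dot product; this is the locality distinction drawn in the text.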

Table 4-3: Optimized m values for local, global and spectral angle kernels for AWiFS, LISS-III and LISS-IV images (Resourcesat-1) along with the calculated Mean Membership Difference (Δ) and Entropy (E)
Columns: AWiFS (Δ, E, m) | LISS-III (Δ, E, m) | LISS-IV (Δ, E, m)
Rows (Resourcesat-1): Linear, Polynomial, Sigmoid (global); Gaussian (Euclidean), Radial Basis, KMOD, Inverse Multiquadratic (local); Spectral Angle

Table 4-4: Optimized m values for local, global and spectral angle kernels for AWiFS, LISS-III and LISS-IV images (Resourcesat-2) along with the calculated Mean Membership Difference (Δ) and Entropy (E)
Columns: AWiFS (Δ, E, m) | LISS-III (Δ, E, m) | LISS-IV (Δ, E, m)
Rows (Resourcesat-2): Linear, Polynomial, Sigmoid (global); Gaussian (Euclidean), Radial Basis, KMOD, Inverse Multiquadratic (local); Spectral Angle


Figure 4-7: Generated fractional images for optimized m values for Resourcesat-1 LISS-IV for (i) Linear (ii) Polynomial (iii) Sigmoid (iv) Gaussian kernel using Euclidean norm (v) Radial Basis (vi) KMOD (vii) Inverse Multiquadratic and (viii) Spectral Angle kernels for classes identified as (a) Agricultural field with crop (b) Sal forest (c) Eucalyptus plantations (d) Dry agricultural field without crop (e) Moist agricultural field without crop and (f) Water


Figure 4-8: Generated fractional images for optimized m values for Resourcesat-2 LISS-IV for (i) Linear (ii) Polynomial (iii) Sigmoid (iv) Gaussian kernel using Euclidean norm (v) Radial Basis (vi) KMOD (vii) Inverse Multiquadratic and (viii) Spectral Angle kernels for classes identified as (a) Agricultural field with crop (b) Eucalyptus plantation (c) Fallow land (d) Sal forest (e) Water

Table 4-5 shows the maximum mean membership difference values for optimized m values for each kernel for the Resourcesat-1 AWiFS dataset. The global kernels give the lowest values as compared to the local kernels and the spectral angle kernel. The water class, being more homogeneous, gives high values even for the global kernels. When analysing the mean membership difference values calculated for the other classes, however, they are much lower compared to water. Fractional images generated for the local kernels and the spectral angle kernel highlight features with respect to their corresponding feature class. For instance, for sal forest, patches were better visible for the local kernels as compared to the global and spectral angle kernels. Even though the spectral angle kernel has the highest mean membership difference, it can be seen from the fraction images that different classes merge with other classes. Water, being more homogeneous than the other classes, gives higher mean membership difference values for all kernels.

Table 4-5: Maximum mean membership difference values estimated for optimized values of m (Resourcesat-1 AWiFS)
Columns (classes): Agricultural field with crop | Sal Forest | Eucalyptus Plantations | Dry agricultural field without crop | Moist agricultural field without crop | Water
Rows (kernels): Linear, Polynomial, Sigmoid, Gaussian (Euclidean), Radial Basis, IM, KMOD, SA
Note: Highlighted values for kernels denote the acceptable values.

4.4. RESULTS OF FCM CLASSIFIER USING COMPOSITE KERNELS

Composite kernels were tested to incorporate the spatial properties of global/local kernels and the spectral properties of the spectral angle kernel. For this study five combinations of composite kernels have been studied. For Resourcesat-1 and -2, the IM kernel and the Gaussian kernel with the Euclidean norm were the best single local kernels. Thus, to mix in the spectral properties, these kernels were added to the spectral angle kernel. Also, a combination of a local kernel and a global kernel was considered. Even though the linear kernel did not give good results, it was added to the spectral angle kernel to check for an improvement in performance. Table 4-6 and Table 4-7 show the five combinations of composite kernels and their optimized m and λ values along with the calculated entropy and mean membership difference for Resourcesat-1 and Resourcesat-2 respectively. Among the different combinations of composite kernels, the lowest entropy value was obtained for Resourcesat-1 with the IM-Spectral kernel, which is a combination of a local and a spectral angle kernel. When a combination of a local kernel and the spectral angle kernel (a local-spectral kernel) was compared with a combination of a global kernel and the spectral angle kernel (a global-spectral kernel), the performance of the former was better. There was little difference between the entropy values of the single linear kernel and the

composite linear-spectral kernel. The fractional images generated for the combinations of different kernels for Resourcesat-1 are shown in Figure 4-9. It can be seen that for the Gaussian-Spectral kernel there is a misclassification between the sal forest and eucalyptus plantations. Visually, the fractional images generated by the linear-spectral kernel do not highlight the classes considered, thus indicating misclassification. Considering the entropy values, the best composite kernels are: the Resourcesat-1 IM-Spectral kernel, the Resourcesat-2 Gaussian-Spectral kernel, the Resourcesat-1 IM-Linear kernel and the Resourcesat-1 and -2 Linear-Spectral kernels. The accuracy assessment results for the selected kernels are given in the next section.

Table 4-6: Optimized m values for composite kernels for AWiFS, LISS-III and LISS-IV images (Resourcesat-1) along with the calculated Mean Membership Difference (Δ), Entropy (E) and weight given to each kernel (λ)
Columns: AWiFS (E, λ, m) | LISS-III (E, λ, m) | LISS-IV (E, λ, m)
Rows (Resourcesat-1): Gaussian Spectral, IM Spectral, Gaussian Linear, IM Linear, Linear Spectral

Table 4-7: Optimized m values for composite kernels for AWiFS, LISS-III and LISS-IV images (Resourcesat-2) along with the calculated Mean Membership Difference (Δ), Entropy (E) and weight given to each kernel (λ)
Columns: AWiFS (E, λ, m) | LISS-III (E, λ, m) | LISS-IV (E, λ, m)
Rows (Resourcesat-2): Gaussian Spectral, IM Spectral, Gaussian Linear, IM Linear, Linear Spectral

Figure 4-9: Generated fractional images for optimized m values of Resourcesat-1 LISS-IV for (i) Gaussian-Spectral (ii) IM-Spectral (iii) Gaussian-Linear (iv) IM-Linear (v) Linear-Spectral for classes identified as (a) Agricultural field with crop (b) Sal forest (c) Eucalyptus plantation (d) Dry agricultural field without crop (e) Moist agricultural field without crop (f) Water
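The weighted-summation construction behind the composite kernels of Tables 4-6 and 4-7 (Camps-Valls et al., 2006) can be sketched as follows; the kernel parameters, sample vectors and the value of λ below are illustrative assumptions.

```python
import numpy as np

def gaussian(x, y, s=1.0):
    """Local kernel: Gaussian with the Euclidean norm."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * s**2))

def spectral_angle(x, y):
    """Spectral kernel: cosine of the angle between spectral vectors."""
    return (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

def composite(x, y, k1, k2, lam=0.5):
    """Weighted summation of two Mercer kernels; a convex combination
    of valid kernels is itself a valid kernel."""
    return lam * k1(x, y) + (1.0 - lam) * k2(x, y)

# e.g. a Gaussian-Spectral composite with weight lam given to the Gaussian part
x = np.array([0.3, 0.6, 0.2])
y = np.array([0.4, 0.5, 0.25])
k = composite(x, y, gaussian, spectral_angle, lam=0.7)
```

The weight λ is the quantity optimized per composite kernel in the tables above; a poorly performing component kernel drags the composite down regardless of λ, which matches the accuracy pattern reported for the linear-spectral combinations.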

4.5. ACCURACY ASSESSMENT RESULTS

In order to assess the accuracy of the classified outputs generated by FCM and by FCM using single or composite kernels, an image-to-image accuracy assessment approach was selected. The high resolution LISS-IV image was used for the reference dataset generation. Section 3.5 explains the various accuracy assessment methods. Among them, FERM was used in this research study because soft outputs are generated. The assessed fuzzy overall accuracy measures for the selected best kernels, FCM and the composite kernels are shown in Table 4-8. The accuracy is assessed for the AWiFS image with LISS-III and LISS-IV as reference images and for the LISS-III image with the LISS-IV image used as the reference dataset.

Table 4-8: Accuracy assessment results for FCM, best single kernels and best composite kernels
Columns: AWiFS v/s LISS-III | AWiFS v/s LISS-IV | LISS-III v/s LISS-IV, each for R1 (%) and R2 (%)
Rows: FCM, IM kernel, Gaussian kernel, SA kernel, Linear kernel, IM Spectral, Gaussian Spectral, Linear Spectral

The accuracy measure obtained for the FCM classifier helps to compare its performance with the KFCM classifier. Using the entropy and mean membership difference method, the best single kernels come under the category of local kernels. To combine the global kernels with the local kernels, a combination of a local and a global kernel was considered. Similarly, to mix the global properties with the spectral properties, a global-spectral kernel was considered. When interpreting the fractional images generated using the linear kernel, a higher misclassification rate is observed for the vegetation classes.
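The FERM computation behind the fuzzy overall accuracies above (after Binaghi et al., 1999) can be sketched as follows; the toy membership matrices are assumptions, and only the overall accuracy is derived, though user's and producer's accuracies follow from the row and column sums in the same way.

```python
import numpy as np

def ferm(R, C):
    """Fuzzy Error Matrix: cell (i, j) accumulates, over all pixels k,
    the minimum of classified membership C[k, i] and reference membership R[k, j]."""
    c = R.shape[1]
    M = np.zeros((c, c))
    for i in range(c):
        for j in range(c):
            M[i, j] = np.minimum(C[:, i], R[:, j]).sum()
    return M

def fuzzy_overall_accuracy(R, C):
    """Overall accuracy: agreement on the diagonal over total reference membership."""
    return np.trace(ferm(R, C)) / R.sum()

# 3 pixels, 2 classes; rows are per-pixel memberships summing to 1
R = np.array([[0.8, 0.2], [0.1, 0.9], [0.6, 0.4]])  # reference (finer image)
C = np.array([[0.7, 0.3], [0.2, 0.8], [0.5, 0.5]])  # classified (coarser image)
acc = fuzzy_overall_accuracy(R, C)
```

With these toy values the diagonal agreement is 2.7 of a total reference membership of 3.0, i.e. 90%; identical reference and classified matrices give 100%, the crisp-case analogue of a perfect confusion matrix.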
Among the two best kernels it can be seen that the IM kernel has a higher fuzzy accuracy, equal to 97.30% and 96.97% for the Resourcesat-1 AWiFS dataset assessed against LISS-III and LISS-IV respectively. This is higher than that of the FCM classifier, equal to 84.43% and 77.95% for Resourcesat-1 AWiFS respectively. The accuracy for the LISS-III dataset was higher compared to the AWiFS dataset due to an

increase in spatial resolution. The KFCM classifier has a higher accuracy as compared to the FCM classifier. Using the Gaussian kernel, overall accuracies equal to 76.42% and 83.03% respectively were obtained. These are slightly lower than the accuracies obtained with the FCM classifier. The composite kernels show overall the lowest accuracy as compared to the single kernels and FCM. Among the considered composite kernels, the Gaussian-Spectral kernel showed the highest overall accuracy of 29.37% and 59.27% for AWiFS and LISS-III Resourcesat-2 respectively (Table B-1 to Table B-36, Appendix B).

4.6. UNTRAINED CLASSES

Classes ignored by the analyst during the training stage of a classifier correspond to untrained classes. Pixels of untrained classes show high membership values for spectrally different trained classes and thus decrease the classification accuracy (Foody, 2000). In this research work, the KFCM classifier ignored the mean class values of agricultural field with crop for both the Resourcesat-1 and Resourcesat-2 datasets. Tables 4-9 to 4-12 compare the fuzzy user's accuracy of the best single kernels, Inverse Multiquadratic and Gaussian, for both the trained and the untrained case. The detailed accuracy assessment measures are given in Appendix B.

Table 4-9: Comparison of accuracy assessment in the trained as well as untrained case for the IM kernel and FCM for AWiFS with LISS-III image (Resourcesat-1)
Columns: Inverse Multiquadratic (Trained, Untrained) | FCM (Trained, Untrained)
Rows (Fuzzy User's Accuracy): Sal Forest, Eucalyptus Plantation, Dry Agricultural Field Without Crop, Moist Agricultural Field Without Crop, Water, Average User's Accuracy

Table 4-10: Comparison of accuracy assessment in the trained as well as untrained case for the IM kernel and FCM for LISS-III with LISS-IV image (Resourcesat-1)
Columns: Inverse Multiquadratic (Trained, Untrained) | FCM (Trained, Untrained)
Rows (Fuzzy User's Accuracy): Sal Forest, Eucalyptus Plantation, Dry Agricultural Field Without Crop, Moist Agricultural Field Without Crop, Water, Average User's Accuracy

The Inverse Multiquadratic (IM) kernel was identified earlier to be the best single kernel for Resourcesat-1. When the average user's accuracy in the untrained case is compared to that of the trained classifier, a decrease in the average user's accuracy is observed. For AWiFS Resourcesat-1, the average user's accuracy decreased by 41.74% and 23.67% in the case of IM and simple FCM respectively. For Resourcesat-2, there is an 18.94% and 21.14% decrease in the average user's accuracy of the Gaussian kernel using the Euclidean norm and FCM respectively for the LISS-III images. For Resourcesat-1 AWiFS using the Spectral Angle kernel, a decrease was likewise observed in the untrained case. A more detailed accuracy assessment for the untrained case is given in Appendix B (Appendix B.7 to B.10). Figure 4-10 shows the graphical comparison between the trained and untrained cases.

Table 4-11: Comparison of accuracy assessment in the trained as well as untrained case for the Gaussian kernel using Euclidean norm and FCM for AWiFS with LISS-III image (Resourcesat-2)
Columns: Gaussian kernel using Euclidean Norm (Trained, Untrained) | FCM (Trained, Untrained)
Rows (Fuzzy User's Accuracy): Eucalyptus Plantation, Fallow Land, Sal Plantations, Water, Average User's Accuracy

Table 4-12: Comparison of accuracy assessment in the trained as well as untrained case for the Gaussian kernel using Euclidean norm and FCM for LISS-III with LISS-IV image (Resourcesat-2)
Columns: Gaussian kernel using Euclidean Norm (Trained, Untrained) | FCM (Trained, Untrained)
Rows (Fuzzy User's Accuracy): Eucalyptus Plantation, Fallow Land, Sal Plantations, Water, Average User's Accuracy

Figure 4-10: Graphical representation of average user's accuracy for the untrained and trained cases for IM and FCM, Resourcesat-1 (a) AWiFS (b) LISS-III, at optimized m.

5. DISCUSSION

The present chapter discusses the various results obtained from the three classification approaches used. In this study different single as well as composite kernels were incorporated into the FCM objective function to handle non-linearity in the data. The main objective of this research was to optimally separate non-linear classes using the KFCM approach. Classification problems can be resolved by the use of various classifiers that may be suitable for specific datasets. Spectral characteristics of various class labels can differ in their geometric structure for different bands. Classes that can be separated using a linear decision boundary are the simplest case. Non-linear data structures can exist due to variation in the spectral values: a high variation observed in the spectral values of one band may be lower in another band, which leads to non-linearity in the data. Figure 6-1 shows the presence of non-linearity in the datasets used in this study. Non-linearity in the classes agricultural field with crop, sal plantations, eucalyptus plantation and water was observed in band 1-band 2. It is clear that these classes cannot be separated using a linear decision boundary. The choice of the most pertinent kernel relies on the problem under study. Pal (2009) used five kernels, namely the linear, polynomial, sigmoid, radial basis and linear spline kernels, for image classification. Considering the problem of non-linearity, eight kernels (Table 4-3) were considered for this study. Three types of single kernels were considered which exhibit dissimilar properties, and these were integrated to give composite kernels. Camps-Valls et al. (2006) showed different methods to combine such single kernels for hyperspectral image classification. Among them, the weighted summation method has been adopted in this study. The initial focus in this research work was on optimizing the different parameters of the classifiers.
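The way a kernel is incorporated into the FCM objective can be illustrated with the widely used KFCM membership update below. This is the standard formulation for kernels with K(x, x) = 1 and may differ in detail from the exact objective adopted in this work; the sample data and class means are assumptions.

```python
import numpy as np

def gaussian(x, y, s=1.0):
    """Gaussian kernel with the Euclidean norm; K(x, x) = 1."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * s**2))

def kfcm_memberships(X, V, m=1.5, kernel=gaussian):
    """Supervised KFCM membership update. For a kernel with K(x, x) = 1 the
    feature-space distance reduces to 2(1 - K(x, v)), so
    u_ik is proportional to (1 - K(x_k, v_i))^(-1/(m-1)); rows sum to 1."""
    N, c = X.shape[0], V.shape[0]
    D = np.empty((N, c))
    for i in range(c):
        for k in range(N):
            D[k, i] = max(1.0 - kernel(X[k], V[i]), 1e-12)  # guard zero distance
    inv = D ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

X = np.array([[0.1, 0.2], [0.9, 0.8], [0.5, 0.5]])
V = np.array([[0.1, 0.2], [0.9, 0.8]])   # class means taken from training signatures
U = kfcm_memberships(X, V, m=1.5)
```

Because the distance is evaluated through the kernel, the classifier effectively measures closeness in the high-dimensional feature space, which is where the non-linear class boundaries of the input space become linear.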
Setting optimum values for the different classifiers is important for their successful performance. These values may change with the dataset used. Optimal values of m were obtained based on the minimum entropy and maximum mean membership difference criteria. For FCM, FCM using single kernels and FCM using composite kernels, the entropy is very high for values of m greater than 4.0. Similarly, the maximum mean membership difference was obtained for lower values of m, i.e., between 1.01 and 2.0. Based on this interpretation, from Table 4-2 for Resourcesat-1 AWiFS, LISS-III and LISS-IV m was optimized at 1.35, 1.39 and 1.34 respectively (Appendix A.5). Optimized values of m for single as well as composite kernels for all six images are given in Table 4-3, Table 4-4 and Table 4-6. FCM resulted in an overall accuracy of 77.95% and 82.35% for AWiFS and LISS-III for Resourcesat-1. Past studies have shown that FCM resulted in an overall accuracy of 80.89% and 81.83% for the AWiFS image of Resourcesat-1 and Resourcesat-2 respectively (Singha, 2013). Among the single kernels,

the IM kernel produced a higher overall classification accuracy of 96.97% and 97.63% (Table 4-8) for AWiFS and LISS-III, Resourcesat-1. When comparing the classification accuracy of KFCM with FCM there has been an overall increase of about 21% and 15% for the AWiFS and LISS-III datasets respectively. There was an overall decrease in average user's accuracy from 97.31% to 55.57% for the Resourcesat-1 AWiFS IM kernel in the untrained case (Table 4-9). The composite kernels resulted in the least classification accuracy when compared to FCM and KFCM. From Table 4-8, the Gaussian-Spectral kernel has the highest overall accuracy among them, 29.37% and 59.27% respectively. Using the weighted summation combination approach (Camps-Valls et al., 2006), the composite kernels gave the least overall accuracy; this depends on the performance of the single kernels taken in the combination, which results in the lower accuracy. In local kernels, a sample point affects the kernel values only if it resides in the same cluster as its closest neighbours. This may hold for the dataset used in this research work, as it has more homogeneous areas, and thus the performance of the local kernels is better compared to that of the global category. In global kernels, sample points of the same class that reside far away from each other still influence the kernel values. A sample point may be far away but may be present in a sub-cluster of another class. Because of more heterogeneous areas this may hold for the global kernels, and thus their performance was the lowest compared to the other categories. In agricultural fields with crop, sowing or harvesting is done at different times, which provides variation within agricultural fields. Thus heterogeneity can be found between agricultural fields. Also the sal and eucalyptus plantations have heterogeneity due to small grasslands within these forest patches as well as variation within the sal or eucalyptus trees.
This could be the reason why a few kernels give poor results for the vegetation classes. When the generated fractional images were interpreted for the linear, polynomial and sigmoid kernels, low membership values were found, and all the vegetation classes showed similar membership values in their corresponding fractional images. From Figure 4-7, it can be observed that in the water fraction image the moist agricultural field without crop class shows high membership values. This shows that global kernels cannot classify classes with a small variation in spectral values as well as local kernels can. This is the reason why the fractional images for agricultural field with crop, sal forest and eucalyptus plantation show similar membership values for the global kernels. Thus, it can be concluded that the performance of a particular kernel depends on how well it can differentiate small changes in spectral values. Classification was also tested for untrained classes, where the classifier was not trained using one class (in this work, agricultural field with crop was not used for training). There is an overall decrease in the average user's accuracy in the untrained case as compared to the trained case (Table 4-9). Figure 4-10 shows the graphical representation of the trained and untrained cases for FCM and the IM kernel. Untrained agricultural field with crop pixels have been merged into the sal or eucalyptus classes, due to which the average user's accuracy has reduced (Table B.37 to Table B.54, Appendix B.7 to B.10).

Considering the overall classification accuracy, it can be concluded that KFCM with the IM kernel performs better than the FCM classifier. The KFCM classifier also reduces the mixed pixel problem because of its fuzzy nature. To conclude that KFCM performs better than FCM, it is required to perform the classification with images of varying resolutions; in this study all the kernels were tested for both medium and coarser resolution images. The behaviour of different kernels may also differ with the datasets used. Still, these kernels with fuzzy classifiers may be tested on a large number of different datasets.

6. CONCLUSIONS AND RECOMMENDATIONS

6.1. CONCLUSIONS

The resolution of remote sensing images plays a significant role in the occurrence of mixed pixels. The presence of mixed pixels is a problem which may result in inaccurate classification results. Sub-pixel classification methods such as FCM and Artificial Neural Networks (ANN) are a solution for these uncertain situations. Also, classes may be difficult to separate from each other using a straight line or a hyperplane, in which case they appear to be non-linear; attempting a linear separation may lead to reduced classification accuracy. Thus, to solve the problems of non-linearity and mixed pixels, a kernel based fuzzy approach has been tested in this study. The main objective of this research work was to optimally separate non-linear classes using the KFCM approach. From the comparative evaluation of the various sub-pixel classifiers used, the KFCM classifier with the IM kernel achieved the overall highest classification accuracy. It was also observed that optimal values of the different parameters, the weight constant m and the weight given to each kernel λ, played a significant role in the performance of the KFCM based classifier. To assess the accuracy of soft classification, several methods are available; among them the Fuzzy Error Matrix (FERM) was recommended. A decrease in accuracy values was seen when a coarser resolution AWiFS image was assessed with a finer resolution LISS-IV image, owing to the difference in spatial resolution. This shows that information extracted from a finer resolution image is closer to the ground truth information. A change in the accuracy assessment and mean membership values was also observed due to the higher radiometric resolution of Resourcesat-2 in comparison to Resourcesat-1. Among the various single kernels used, the IM kernel and the Gaussian kernel with the Euclidean norm had the highest overall performance. The IM kernel has the highest overall accuracy, 97.31% for Resourcesat-1 AWiFS, compared to the others (Table 4-8).
Among the composite kernels, the Gaussian-Spectral kernel has about 61.56% overall accuracy, which is less in comparison to the single kernels. Composite kernel performance depends on the performance of the single kernels used to frame the composite kernel: if a best single kernel with lower entropy is combined with a kernel having higher entropy, the resulting composite kernel will have a lower performance. Other methods to combine two single kernels into composite kernels, such as the stacked approach, direct summation and cross-information methods, are recommended (Camps-Valls et al., 2006). In this study, the effect on the accuracy assessment results of dropping agricultural field with crop as an untrained class was also studied. To conclude, the KFCM classifier performed better than the FCM classifier. This study may be concluded as follows: the presence of non-linear data and mixed pixels need not be considered an insurmountable problem, although they are causes of lower classification accuracy.

6.2. ANSWERS TO RESEARCH QUESTIONS

A-1 How can non-linearity within class boundaries be effectively handled using KFCM?
Answer: Samples in a dataset that cannot be separated using a straight line or a hyperplane appear to be non-linear. The Resourcesat-1 LISS-IV image used in this research work contains non-linear data, which is clear from the 2D scatterplots shown in Figure 6-1, taking two bands at a time. Considering Figure 6-1(b), the samples taken for agricultural field with crop, sal forest and eucalyptus plantation appear non-linear and cannot be separated using a hyperplane.

Figure 6-1: Non-linearity in different classes as 2D scatterplots for the Resourcesat-1 LISS-IV image in (a) band 1-band 2 (b) band 2-band 3 (c) band 1-band 3 for the classes agricultural field with crop, sal forest, eucalyptus plantations, dry agricultural field without crop, moist agricultural field without crop and water

Thus, these non-linear samples are mapped to a higher dimensional space using kernel functions, where they are linearly separable and the non-linearity in the input space is removed for separating the different classes. Even though one cannot visualize the linear separation in the higher dimensional space, this can be demonstrated by comparing the classification accuracy of FCM and KFCM.

B-1 How can mixed pixels be handled using KFCM?
Answer: Mixed pixels occur when more than one land cover class is present within a single pixel. The FCM algorithm handles the occurrence of mixed pixels by estimating the membership values for each land cover

classes within a pixel and thus increases the classification accuracy. As a fuzzy approach is used in this study, KFCM handles mixed pixels in the same way as FCM.

C-1 How can the performance of single/composite kernels in KFCM be evaluated?
Answer: The uncertainty in the different single or composite kernels can be found using the entropy values calculated for both cases. The accuracy can be improved by optimizing the value of m, which matches the fuzziness on the ground with the fuzziness in the image. Optimization of m was done by selecting the value with minimum entropy and maximum mean membership difference. As the approach used is fuzzy, the classified output images are in the form of fractional images. The performance of a single or composite kernel can then be evaluated using an image-to-image accuracy assessment technique, where a high resolution image is used to evaluate the performance of coarser resolution images. The Fuzzy Error Matrix (FERM) was used to assess the classification accuracy.

D-1 To which degree is the FCM classification algorithm capable of handling non-linear feature vectors of different classes for classification?
Answer: The FCM classifier shows a reduction in accuracy of about 21% and 15% for the AWiFS and LISS-III datasets when compared with KFCM. This decrease in overall accuracy shows the drawback of the FCM algorithm in handling non-linear feature vectors in the input space. This holds for the trained classifier; for the untrained case, there is a further decrease in average user's accuracy when compared with the corresponding trained case.

E-1 What will be the effect of using composite kernels as compared to single kernels?
Answer: Composite kernels are used in order to incorporate the spectral properties of the spectral angle kernel as well as the local or global proximity properties of the local or global kernels in a classified image.
Also, in this research work, a local as well as a global kernel was added to the spectral kernel, and combinations of local and global kernels were also studied. It was found, however, that a composite kernel has a reduced accuracy as compared to FCM as well as to FCM using a single kernel.
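The implicit mapping invoked in answer A-1 can be made concrete for the degree-2 homogeneous polynomial kernel, whose feature map is known in closed form; this small numerical check is illustrative only.

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 homogeneous polynomial kernel:
    (x . y)^2 equals the dot product of phi(x) and phi(y) in 3-D feature space."""
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])
k_implicit = (x @ y) ** 2      # kernel trick: phi is never formed explicitly
k_explicit = phi(x) @ phi(y)   # the same value via the mapped feature space
```

Classes that need a curved boundary in the two input dimensions can admit a flat separating hyperplane among the mapped vectors, while KFCM only ever evaluates the left-hand, implicit form.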

6.3. RECOMMENDATIONS

For every research work, it is of high importance to assess the quality of the product. For effective classification of data, various researchers have introduced many algorithms. Even though the KFCM classifier solves the problem of non-linearity, it does not solve the problem of overlap between different classes. Thus there are limitations to the classifier used for this research work. The KFCM classification technique can be improved with the following points under consideration:

- The Possibilistic c-means (PCM) algorithm has been proven to deal with noise and outliers (Krishnapuram and Keller, 1996). Thus a Kernel based Possibilistic c-means (KPCM) algorithm can be studied to improve on the performance of KFCM classification.
- For the composite kernels, the weighted summation method has been used. Other methods such as the stacked approach and the direct summation kernel (Camps-Valls et al., 2006) can be used to study the behaviour of composite kernels.
- The classified results can be improved by optimizing the weight constant m. In this research work, the maximum mean membership difference value of 1.0 has been taken (the ideal case), which may not hold when matching the fuzziness in the image to the fuzziness on the ground (a value less than 1.0).
- Unsupervised Kernel based Fuzzy c-means (KFCM) clustering could be done, where the mean feature vectors are not initialized from the signature data of the various classes.

REFERENCES

Atkinson, P. M., Cutler, M. E. J., & Lewis, H. (1997). Mapping sub-pixel proportional land cover with AVHRR imagery. International Journal of Remote Sensing, 18(4).
Awan, A. M., & Sap, M. N. M. (2005). Clustering spatial data using a kernel-based algorithm. In Proceedings of the Annual Research Seminar.
Ayat, N. E., Cheriet, M., Remaki, L., & Suen, C. Y. (2001). KMOD - A New Support Vector Machine Kernel with Moderate Decreasing. IEEE.
Ben-Hur, A., Horn, D., Siegelmann, H. T., & Vapnik, V. (2001). Support Vector Clustering. Journal of Machine Learning Research, 2.
Bezdek, J. C., Ehrlich, R., & Full, W. (1984). FCM: The Fuzzy c-Means Clustering Algorithm. Computers & Geosciences, 10.
Bhatt, S. R., & Mishra, P. K. (2013). Study of Local Kernel with Fuzzy C Mean Algorithm. International Journal of Advanced Research in Computer Science & Software Engineering, 3(12).
Bhatt, S. R., & Mishra, P. K. (2014). Analysis of Global Kernels Using Fuzzy C Means Algorithm. International Journal of Advanced Research in Computer Science and Software Engineering, 4(6).
Binaghi, E., Brivio, P. A., Ghezzi, P., & Rampini, A. (1999). A fuzzy set-based accuracy assessment of soft classification. Pattern Recognition Letters, 20.
Boser, B. E., Guyon, I. M., & Vapnik, V. N. (1992). A Training Algorithm for Optimal Margin Classifiers. In 5th Annual ACM Workshop on COLT.
Campbell, J. B. (1996). Introduction to Remote Sensing.
Camps-Valls, G., & Bruzzone, L. (2005). Kernel-Based Methods for Hyperspectral Image Classification. IEEE Transactions on Geoscience and Remote Sensing, 43(6).
Camps-Valls, G., & Bruzzone, L. (2009). Kernel Methods for Remote Sensing Data Analysis.
Camps-Valls, G., Gómez-Chova, L., Calpe-Maravilla, J., Martín-Guerrero, J. D., Soria-Olivas, E., & Alonso-Chordá, L. (2004). Robust Support Vector Method for Hyperspectral Data Classification and Knowledge Discovery. IEEE Transactions on Geoscience and Remote Sensing, 42(7).
Camps-Valls, G., Gómez-Chova, L., Muñoz-Marí, J., Vila-Francés, J., & Calpe-Maravilla, J. (2006). Composite Kernels for Hyperspectral Image Classification. IEEE Geoscience and Remote Sensing Letters, 3(1).
Camps-Valls, G., Gómez-Chova, L., Muñoz-Marí, J., Rojo-Álvarez, J. L., & Martínez-Ramón, M. (2008). Kernel-Based Framework for Multitemporal and Multisource Remote Sensing Data Classification and Change Detection. IEEE Transactions on Geoscience and Remote Sensing, 46(6).

Cannon, R. L., Dave, J. V., Bezdek, J. C., & Trivedi, M. M. (1986). Segmentation of a Thematic Mapper Image Using the Fuzzy c-Means Clustering Algorithm. IEEE Transactions on Geoscience and Remote Sensing, 24(3).
Chawla, S. (2010). Possibilistic c-means - Spatial Contextual Information based sub-pixel classification approach for multi-spectral data.
Choodarathnakara, A. L., Kumar, T. A., Koliwad, S., & Patil, C. G. (2012a). Mixed Pixels: A Challenge in Remote Sensing Data Classification for Improving Performance. International Journal of Advanced Research in Computer Engineering & Technology, 1(9).
Choodarathnakara, A. L., Kumar, T. A., Koliwad, S., & Patil, C. G. (2012b). Soft Classification Techniques for RS Data. IJCSET, 2(11).
Congalton, R. G. (1991). A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data. Remote Sensing of Environment, 37.
Dehghan, H., & Ghassemian, H. (2006). Measurement of uncertainty by the entropy: application to the classification of MSS data. International Journal of Remote Sensing, 27(18).
Filippone, M., Camastra, F., Masulli, F., & Rovetta, S. (2008). A survey of kernel and spectral methods for clustering. Pattern Recognition, 41.
Fisher, P. F., & Pathirana, S. (1990). The Evaluation of Fuzzy Membership of Land Cover Classes in the Suburban Zone. Remote Sensing of Environment, 34.
Foody, G. M. (1995). Cross-entropy for the evaluation of the accuracy of a fuzzy land cover classification with fuzzy ground reference data. ISPRS Journal of Photogrammetry and Remote Sensing, 50(5).
Foody, G. M. (2000). Estimation of sub-pixel land cover composition in the presence of untrained classes. Computers & Geosciences, 26.
Genton, M. G. (2001). Classes of Kernels for Machine Learning: A Statistics Perspective. Journal of Machine Learning Research, 2.
Girolami, M. (2002). Mercer Kernel-Based Clustering in Feature Space. IEEE Transactions on Neural Networks, 13(3).
Graves, D., & Pedrycz, W. (2007). Performance of kernel-based fuzzy clustering. Electronics Letters, 43(25).
Harikumar, A. (2014). The effects of discontinuity adaptive MRF models on the Noise classifier.
Hemanth, D. J., Selvathi, D., & Anitha, J. (2009). Effective Fuzzy Clustering Algorithm for Abnormal MR Brain Image Segmentation. In IEEE International Advanced Computing Conference.
Huang, H., Chuang, Y., & Chen, C. (2011). Multiple Kernel Fuzzy Clustering. IEEE Transactions on Fuzzy Systems.
Huang, H., Chuang, Y., & Chen, C. (2012). Multiple Kernel Fuzzy Clustering. IEEE Transactions on Fuzzy Systems, 20(1).
Huang, H., & Zhu, J. (2006). Kernel-based Non-linear Feature Extraction Methods for Speech Recognition. In Proceedings of the Sixth International Conference on Intelligent Systems and Applications.

Isaacs, J. C., Foo, S. Y., & Meyer-Baese, A. (2007). Novel Kernels and Kernel PCA for Pattern Recognition. In Proceedings of the 2007 IEEE Symposium on Computational Intelligence in Robotics and Automation.
Jain, C., & Srivastava, G. (2013). Designing a Classifier with KFCM Algorithm to Achieve Optimization of Clustering and Classification Simultaneously. International Journal of Emerging Technology and Advanced Engineering, 3(9).
Pontius Jr., R. G., & Cheuk, M. L. (2006). A generalized cross-tabulation matrix to compare soft-classified maps at multiple resolutions. International Journal of Geographical Information Science, 20(1).
Kaur, P., Gupta, P., & Sharma, P. (2012). Review and Comparison of Kernel Based Fuzzy Image Segmentation Techniques. I.J. Intelligent Systems and Applications, 7.
Kavzoglu, T., & Reis, S. (2008). Performance Analysis of Maximum Likelihood and Artificial Neural Network Classifiers for Training Sets with Mixed Pixels. GIScience & Remote Sensing, 45(3).
Kim, D., Lee, K. H., & Lee, D. (2004). On cluster validity index for estimation of the optimal number of fuzzy clusters. Pattern Recognition, 37.
Kim, K. I., Park, S. H., & Kim, H. J. (2001). Kernel Principal Component Analysis for Texture Classification. IEEE Signal Processing Letters, 8(2).
Kloditz, C., van Boxtel, A., Carfagna, E., & van Deursen, W. (1998). Estimating the Accuracy of Coarse Scale Classification Using High Scale Information. Photogrammetric Engineering & Remote Sensing, 64(2).
Krishnapuram, R., & Keller, J. M. (1996). The Possibilistic c-Means Algorithm: Insights and Recommendations. IEEE Transactions on Fuzzy Systems, 4(3).
Kumar, A. (2007). Investigation in Sub-pixel classification approaches for Land Use and Land Cover Mapping. Unpublished PhD Thesis, IIT Roorkee.
Kumar, A., Ghosh, S. K., & Dadhwal, V. K. (2006). A comparison of the performance of fuzzy algorithm versus statistical algorithm based sub-pixel classifier for remote sensing data. In International Society for Photogrammetry and Remote Sensing.
Latifovic, R., & Olthof, I. (2004). Accuracy assessment using sub-pixel fractional error matrices of global land cover products derived from satellite data. Remote Sensing of Environment, 90.
Lillesand, T. M., & Kiefer, R. W. (1979). Remote Sensing and Image Interpretation.
Lu, D., & Weng, Q. (2007). A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing, 28(5).
Mercier, G., & Lennon, M. (2003). Support Vector Machines for Hyperspectral Image Classification with Spectral-based Kernels. In IGARSS.
Mohamed, R. M., & Farag, A. A. (2004). Mean Field Theory for Density Estimation Using Support Vector Machines. Computer Vision and Image Processing Laboratory, University of Louisville, Louisville, KY.

Okeke, F., & Karnieli, A. (2006). Methods for fuzzy classification and accuracy assessment of historical aerial photographs for vegetation change analyses. Part I: Algorithm development. International Journal of Remote Sensing.
Pal, M. (2009). Kernel Methods in Remote Sensing: A Review. ISH Journal of Hydraulic Engineering, 15.
Ravindraiah, R., & Tejaswini, K. (2013). A Survey of Image Segmentation Algorithms Based on Fuzzy Clustering. International Journal of Computer Science and Mobile Computing, 2(7).
Richards, J. A., & Jia, X. (2005). Remote Sensing Digital Image Analysis.
Settle, J. J., & Drake, N. A. (1993). Linear mixing and the estimation of ground cover proportions. International Journal of Remote Sensing, 14(6).
Silvan-Cardenas, J. L., & Wang, L. (2008). Sub-pixel confusion-uncertainty matrix for assessing soft classifications. Remote Sensing of Environment, 112.
Singha, M. (2013). Study the effect of discontinuity adaptive MRF models in fuzzy based classifier.
Smits, P. C., Dellepiane, S. G., & Schowengerdt, R. A. (1999). Quality assessment of image classification algorithms for land-cover mapping: A review and a proposal for a cost-based approach. International Journal of Remote Sensing, 20(8).
Suganya, R., & Shanthi, R. (2012). Fuzzy C-Means Algorithm - A Review. International Journal of Scientific and Research Publications, 2(11), 1-3.
Tan, K. C., Lim, H. S., & Jafri, M. Z. M. (2011). Comparison of Neural Network and Maximum Likelihood Classifiers for Land Cover Classification Using Landsat Multispectral Data. In IEEE Conference on Open Systems.
Tsai, D., & Lin, C. (2011). Fuzzy C-means based clustering for linearly and nonlinearly separable data. Pattern Recognition, 44.
Tso, B., & Mather, P. M. (2000). Classification of Remotely Sensed Data.
Vinushree, N., Hemalatha, B., & Kaliappan, V. (2014). Efficient Kernel-Based Fuzzy C-Means Clustering for Pest Detection and Classification. In World Congress on Computing and Communication Technologies.
Wang, F. (1990). Fuzzy Supervised Classification of Remote Sensing Images. IEEE Transactions on Geoscience and Remote Sensing, 28(2).
Yang, A., Jiang, L., & Zhou, Y. (2007). A KFCM-based Fuzzy Classifier. In Fourth International Conference on Fuzzy Systems and Knowledge Discovery.
Yun-song, S., & Yu-feng, S. (2010). Remote sensing image classification and recognition based on KFCM. In 5th International Conference on Computer Science and Education (ICCSE), 2010.
Zadeh, L. A. (1965). Fuzzy Sets. Information and Control, 8.

Zhang, D., & Chen, S. (2002). Fuzzy Clustering Using Kernel Method. In International Conference on Control and Automation.
Zhang, D., & Chen, S. (2003). Clustering incomplete data using kernel-based fuzzy c-means algorithm. Neural Processing Letters, 18.
Zhang, J., & Foody, G. M. (2002). Fully-fuzzy supervised classification of sub-urban land cover from remotely sensed imagery: Statistical and artificial neural network approaches. International Journal of Remote Sensing, 22(5).
Zhang, J., & Foody, G. M. (1998). A fuzzy classification of sub-urban land cover from remotely sensed imagery. International Journal of Remote Sensing, 19(14).
Zimmermann, H. J. (2001). Fuzzy Set Theory and Its Applications.

APPENDIX A

A.1. Generated fraction images for the best single kernels

Figure A-1: Generated fractional images for best single kernels from LISS-III (Resourcesat-1) for (i) linear, (ii) Inverse Multiquadratic and (iii) spectral angle kernels, for classes identified as (a) Agriculture field with crop, (b) Sal forest, (c) Eucalyptus plantation, (d) Dry agriculture field with crop, (e) Moist agriculture field with crop and (f) Water.

Figure A-2: Generated fractional images for best single kernels from LISS-III (Resourcesat-2) for (i) linear, (ii) Gaussian kernel using Euclidean norm and (iii) spectral angle kernels, for classes identified as (a) Agriculture field with crop, (b) Eucalyptus plantation, (c) Fallow land, (d) Sal forest and (e) Water.

Figure A-3: Generated fractional images for best single kernels from AWiFS (Resourcesat-1) for (i) linear, (ii) Inverse Multiquadratic and (iii) spectral angle kernels, for classes identified as (a) Agriculture field with crop, (b) Sal forest, (c) Eucalyptus plantation, (d) Dry agriculture field with crop, (e) Moist agriculture field with crop and (f) Water.

Figure A-4: Generated fractional images for best single kernels from AWiFS (Resourcesat-2) for (i) linear, (ii) Gaussian kernel using Euclidean norm and (iii) spectral angle kernels, for classes identified as (a) Agriculture field with crop, (b) Eucalyptus plantation, (c) Fallow land, (d) Sal forest and (e) Water.

A.5. Variation in Entropy (E) and Mean Membership Difference against the weight constant (m) for FCM and FCM using single kernels

Figure A-5: Variation in entropy (E) and mean membership difference against the weight constant (m) for Resourcesat-1 AWiFS for (i) FCM, (ii) linear, (iii) polynomial, (iv) sigmoid, (v) Gaussian kernel with Euclidean norm, (vi) Radial Basis, (vii) KMOD, (viii) Inverse Multiquadratic (IM) and (ix) spectral angle.

A.6. Variation in Entropy (E) and Mean Membership Difference against the weight constant (m) for composite kernels

Figure A-6: Variation in entropy (E) and mean membership difference against the weight constant (m) for the Gaussian-spectral angle kernel for (a) Resourcesat-2 AWiFS, (b) Resourcesat-1 LISS-III and (c) Resourcesat-2 LISS-III.

Figure A-7: Variation in entropy (E) and mean membership difference against the weight constant (m) for the IM-spectral angle kernel for (a) Resourcesat-2 AWiFS, (b) Resourcesat-1 LISS-III and (c) Resourcesat-2 LISS-III.


More information

S. Sreenivasan Research Scholar, School of Advanced Sciences, VIT University, Chennai Campus, Vandalur-Kelambakkam Road, Chennai, Tamil Nadu, India

S. Sreenivasan Research Scholar, School of Advanced Sciences, VIT University, Chennai Campus, Vandalur-Kelambakkam Road, Chennai, Tamil Nadu, India International Journal of Civil Engineering and Technology (IJCIET) Volume 9, Issue 10, October 2018, pp. 1322 1330, Article ID: IJCIET_09_10_132 Available online at http://www.iaeme.com/ijciet/issues.asp?jtype=ijciet&vtype=9&itype=10

More information

2. LITERATURE REVIEW

2. LITERATURE REVIEW 2. LITERATURE REVIEW CBIR has come long way before 1990 and very little papers have been published at that time, however the number of papers published since 1997 is increasing. There are many CBIR algorithms

More information

Fraud Detection using Machine Learning

Fraud Detection using Machine Learning Fraud Detection using Machine Learning Aditya Oza - aditya19@stanford.edu Abstract Recent research has shown that machine learning techniques have been applied very effectively to the problem of payments

More information

FACE RECOGNITION USING SUPPORT VECTOR MACHINES

FACE RECOGNITION USING SUPPORT VECTOR MACHINES FACE RECOGNITION USING SUPPORT VECTOR MACHINES Ashwin Swaminathan ashwins@umd.edu ENEE633: Statistical and Neural Pattern Recognition Instructor : Prof. Rama Chellappa Project 2, Part (b) 1. INTRODUCTION

More information

Chapter 7 UNSUPERVISED LEARNING TECHNIQUES FOR MAMMOGRAM CLASSIFICATION

Chapter 7 UNSUPERVISED LEARNING TECHNIQUES FOR MAMMOGRAM CLASSIFICATION UNSUPERVISED LEARNING TECHNIQUES FOR MAMMOGRAM CLASSIFICATION Supervised and unsupervised learning are the two prominent machine learning algorithms used in pattern recognition and classification. In this

More information

Land cover classification using reformed fuzzy C-means

Land cover classification using reformed fuzzy C-means Sādhanā Vol. 36, Part 2, April 2011, pp. 153 165. c Indian Academy of Sciences Land cover classification using reformed fuzzy C-means 1. Introduction BSOWMYA and B SHEELARANI Department of Electronics

More information

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Minh Dao 1, Xiang Xiang 1, Bulent Ayhan 2, Chiman Kwan 2, Trac D. Tran 1 Johns Hopkins Univeristy, 3400

More information

INCREASING CLASSIFICATION QUALITY BY USING FUZZY LOGIC

INCREASING CLASSIFICATION QUALITY BY USING FUZZY LOGIC JOURNAL OF APPLIED ENGINEERING SCIENCES VOL. 1(14), issue 4_2011 ISSN 2247-3769 ISSN-L 2247-3769 (Print) / e-issn:2284-7197 INCREASING CLASSIFICATION QUALITY BY USING FUZZY LOGIC DROJ Gabriela, University

More information

Research on a Remote Sensing Image Classification Algorithm Based on Decision Table Dehui Zhang1, a, Yong Yang1, b, Kai Song2, c, Deyu Zhang2, d

Research on a Remote Sensing Image Classification Algorithm Based on Decision Table Dehui Zhang1, a, Yong Yang1, b, Kai Song2, c, Deyu Zhang2, d 4th National Conference on Electrical, Electronics and Computer Engineering (NCEECE 205) Research on a Remote Sensing Image Classification Algorithm Based on Decision Table Dehui Zhang, a, Yong Yang, b,

More information

Enhanced Hemisphere Concept for Color Pixel Classification

Enhanced Hemisphere Concept for Color Pixel Classification 2016 International Conference on Multimedia Systems and Signal Processing Enhanced Hemisphere Concept for Color Pixel Classification Van Ng Graduate School of Information Sciences Tohoku University Sendai,

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

Chapter DM:II. II. Cluster Analysis

Chapter DM:II. II. Cluster Analysis Chapter DM:II II. Cluster Analysis Cluster Analysis Basics Hierarchical Cluster Analysis Iterative Cluster Analysis Density-Based Cluster Analysis Cluster Evaluation Constrained Cluster Analysis DM:II-1

More information

Improving the Efficiency of Fast Using Semantic Similarity Algorithm

Improving the Efficiency of Fast Using Semantic Similarity Algorithm International Journal of Scientific and Research Publications, Volume 4, Issue 1, January 2014 1 Improving the Efficiency of Fast Using Semantic Similarity Algorithm D.KARTHIKA 1, S. DIVAKAR 2 Final year

More information

A Comparative Study of Conventional and Neural Network Classification of Multispectral Data

A Comparative Study of Conventional and Neural Network Classification of Multispectral Data A Comparative Study of Conventional and Neural Network Classification of Multispectral Data B.Solaiman & M.C.Mouchot Ecole Nationale Supérieure des Télécommunications de Bretagne B.P. 832, 29285 BREST

More information

Unsupervised Change Detection in Optical Satellite Images using Binary Descriptor

Unsupervised Change Detection in Optical Satellite Images using Binary Descriptor Unsupervised Change Detection in Optical Satellite Images using Binary Descriptor Neha Gupta, Gargi V. Pillai, Samit Ari Department of Electronics and Communication Engineering, National Institute of Technology,

More information

Clustering and Dissimilarity Measures. Clustering. Dissimilarity Measures. Cluster Analysis. Perceptually-Inspired Measures

Clustering and Dissimilarity Measures. Clustering. Dissimilarity Measures. Cluster Analysis. Perceptually-Inspired Measures Clustering and Dissimilarity Measures Clustering APR Course, Delft, The Netherlands Marco Loog May 19, 2008 1 What salient structures exist in the data? How many clusters? May 19, 2008 2 Cluster Analysis

More information

Principal Component Image Interpretation A Logical and Statistical Approach

Principal Component Image Interpretation A Logical and Statistical Approach Principal Component Image Interpretation A Logical and Statistical Approach Md Shahid Latif M.Tech Student, Department of Remote Sensing, Birla Institute of Technology, Mesra Ranchi, Jharkhand-835215 Abstract

More information

Clustering CS 550: Machine Learning

Clustering CS 550: Machine Learning Clustering CS 550: Machine Learning This slide set mainly uses the slides given in the following links: http://www-users.cs.umn.edu/~kumar/dmbook/ch8.pdf http://www-users.cs.umn.edu/~kumar/dmbook/dmslides/chap8_basic_cluster_analysis.pdf

More information

Facial Expression Detection Using Implemented (PCA) Algorithm

Facial Expression Detection Using Implemented (PCA) Algorithm Facial Expression Detection Using Implemented (PCA) Algorithm Dileep Gautam (M.Tech Cse) Iftm University Moradabad Up India Abstract: Facial expression plays very important role in the communication with

More information

Machine Learning for NLP

Machine Learning for NLP Machine Learning for NLP Support Vector Machines Aurélie Herbelot 2018 Centre for Mind/Brain Sciences University of Trento 1 Support Vector Machines: introduction 2 Support Vector Machines (SVMs) SVMs

More information

Data mining with Support Vector Machine

Data mining with Support Vector Machine Data mining with Support Vector Machine Ms. Arti Patle IES, IPS Academy Indore (M.P.) artipatle@gmail.com Mr. Deepak Singh Chouhan IES, IPS Academy Indore (M.P.) deepak.schouhan@yahoo.com Abstract: Machine

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervised Learning and Clustering Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Spring 2009 CS 551, Spring 2009 c 2009, Selim Aksoy (Bilkent University)

More information

2. On classification and related tasks

2. On classification and related tasks 2. On classification and related tasks In this part of the course we take a concise bird s-eye view of different central tasks and concepts involved in machine learning and classification particularly.

More information

ECG782: Multidimensional Digital Signal Processing

ECG782: Multidimensional Digital Signal Processing ECG782: Multidimensional Digital Signal Processing Object Recognition http://www.ee.unlv.edu/~b1morris/ecg782/ 2 Outline Knowledge Representation Statistical Pattern Recognition Neural Networks Boosting

More information

Image Compression: An Artificial Neural Network Approach

Image Compression: An Artificial Neural Network Approach Image Compression: An Artificial Neural Network Approach Anjana B 1, Mrs Shreeja R 2 1 Department of Computer Science and Engineering, Calicut University, Kuttippuram 2 Department of Computer Science and

More information

The Curse of Dimensionality

The Curse of Dimensionality The Curse of Dimensionality ACAS 2002 p1/66 Curse of Dimensionality The basic idea of the curse of dimensionality is that high dimensional data is difficult to work with for several reasons: Adding more

More information

A Robust Band Compression Technique for Hyperspectral Image Classification

A Robust Band Compression Technique for Hyperspectral Image Classification A Robust Band Compression Technique for Hyperspectral Image Classification Qazi Sami ul Haq,Lixin Shi,Linmi Tao,Shiqiang Yang Key Laboratory of Pervasive Computing, Ministry of Education Department of

More information

Semi-Supervised Clustering with Partial Background Information

Semi-Supervised Clustering with Partial Background Information Semi-Supervised Clustering with Partial Background Information Jing Gao Pang-Ning Tan Haibin Cheng Abstract Incorporating background knowledge into unsupervised clustering algorithms has been the subject

More information

Clustering and Visualisation of Data

Clustering and Visualisation of Data Clustering and Visualisation of Data Hiroshi Shimodaira January-March 28 Cluster analysis aims to partition a data set into meaningful or useful groups, based on distances between data points. In some

More information

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition

Pattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition Pattern Recognition Kjell Elenius Speech, Music and Hearing KTH March 29, 2007 Speech recognition 2007 1 Ch 4. Pattern Recognition 1(3) Bayes Decision Theory Minimum-Error-Rate Decision Rules Discriminant

More information

Image Transformation Techniques Dr. Rajeev Srivastava Dept. of Computer Engineering, ITBHU, Varanasi

Image Transformation Techniques Dr. Rajeev Srivastava Dept. of Computer Engineering, ITBHU, Varanasi Image Transformation Techniques Dr. Rajeev Srivastava Dept. of Computer Engineering, ITBHU, Varanasi 1. Introduction The choice of a particular transform in a given application depends on the amount of

More information

Equation to LaTeX. Abhinav Rastogi, Sevy Harris. I. Introduction. Segmentation.

Equation to LaTeX. Abhinav Rastogi, Sevy Harris. I. Introduction. Segmentation. Equation to LaTeX Abhinav Rastogi, Sevy Harris {arastogi,sharris5}@stanford.edu I. Introduction Copying equations from a pdf file to a LaTeX document can be time consuming because there is no easy way

More information

1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra

1. Introduction. 2. Motivation and Problem Definition. Volume 8 Issue 2, February Susmita Mohapatra Pattern Recall Analysis of the Hopfield Neural Network with a Genetic Algorithm Susmita Mohapatra Department of Computer Science, Utkal University, India Abstract: This paper is focused on the implementation

More information

Aardobservatie en Data-analyse Image processing

Aardobservatie en Data-analyse Image processing Aardobservatie en Data-analyse Image processing 1 Image processing: Processing of digital images aiming at: - image correction (geometry, dropped lines, etc) - image calibration: DN into radiance or into

More information

Remote Sensing Introduction to the course

Remote Sensing Introduction to the course Remote Sensing Introduction to the course Remote Sensing (Prof. L. Biagi) Exploitation of remotely assessed data for information retrieval Data: Digital images of the Earth, obtained by sensors recording

More information

SLIDING WINDOW FOR RELATIONS MAPPING

SLIDING WINDOW FOR RELATIONS MAPPING SLIDING WINDOW FOR RELATIONS MAPPING Dana Klimesova Institute of Information Theory and Automation, Prague, Czech Republic and Czech University of Agriculture, Prague klimes@utia.cas.c klimesova@pef.czu.cz

More information

Available online Journal of Scientific and Engineering Research, 2019, 6(1): Research Article

Available online   Journal of Scientific and Engineering Research, 2019, 6(1): Research Article Available online www.jsaer.com, 2019, 6(1):193-197 Research Article ISSN: 2394-2630 CODEN(USA): JSERBR An Enhanced Application of Fuzzy C-Mean Algorithm in Image Segmentation Process BAAH Barida 1, ITUMA

More information

CHAPTER 3. Preprocessing and Feature Extraction. Techniques

CHAPTER 3. Preprocessing and Feature Extraction. Techniques CHAPTER 3 Preprocessing and Feature Extraction Techniques CHAPTER 3 Preprocessing and Feature Extraction Techniques 3.1 Need for Preprocessing and Feature Extraction schemes for Pattern Recognition and

More information

The Gain setting for Landsat 7 (High or Low Gain) depends on: Sensor Calibration - Application. the surface cover types of the earth and the sun angle

The Gain setting for Landsat 7 (High or Low Gain) depends on: Sensor Calibration - Application. the surface cover types of the earth and the sun angle Sensor Calibration - Application Station Identifier ASN Scene Center atitude 34.840 (34 3'0.64"N) Day Night DAY Scene Center ongitude 33.03270 (33 0'7.72"E) WRS Path WRS Row 76 036 Corner Upper eft atitude

More information

SAM and ANN classification of hyperspectral data of seminatural agriculture used areas

SAM and ANN classification of hyperspectral data of seminatural agriculture used areas Proceedings of the 28th EARSeL Symposium: Remote Sensing for a Changing Europe, Istambul, Turkey, June 2-5 2008. Millpress Science Publishers Zagajewski B., Olesiuk D., 2008. SAM and ANN classification

More information

Analysis of Functional MRI Timeseries Data Using Signal Processing Techniques

Analysis of Functional MRI Timeseries Data Using Signal Processing Techniques Analysis of Functional MRI Timeseries Data Using Signal Processing Techniques Sea Chen Department of Biomedical Engineering Advisors: Dr. Charles A. Bouman and Dr. Mark J. Lowe S. Chen Final Exam October

More information