762 IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 6, NO. 4, OCTOBER 2009

Ensemble Classification Algorithm for Hyperspectral Remote Sensing Data

Mingmin Chi, Member, IEEE, Qian Kun, Jón Atli Benediktsson, Fellow, IEEE, and Rui Feng

Abstract—In real applications, it is difficult to obtain a sufficient number of training samples for the supervised classification of hyperspectral remote sensing images. Furthermore, the training samples may not represent the real distribution of the whole space. To attack these problems, an ensemble algorithm that combines generative (mixture of Gaussians) and discriminative (support cluster machine) models for classification is proposed. Experimental results carried out on a hyperspectral data set collected by the reflective optics system imaging spectrometer sensor validate the effectiveness of the proposed approach.

Index Terms—Ensemble classification, hyperspectral remote sensing images, mixture of Gaussians (MoG), support cluster machine (SCM).

I. INTRODUCTION

HYPERSPECTRAL remote sensing images are very important for the discrimination of spectrally similar land-cover classes. To obtain a reliable classifier, a larger amount of representative training samples is necessary for hyperspectral data than for multispectral remote sensing data. In real applications, it is difficult to obtain a sufficient number of training samples for supervised learning. Furthermore, the training samples may not represent the real distribution of the whole space. These issues result in a quantity problem for training samples in the design of a robust supervised classifier. In recent years, semisupervised learning (SSL) methods [1]-[3] have been exploited to overcome the problems with small numbers of labeled samples in the classification of hyperspectral remote sensing images, e.g., self-labeling approaches [1], low-density separation SSL approaches [2], and label-propagation SSL approaches [3].
The methods previously mentioned usually exploit generative or discriminative approaches, where an estimation criterion is used to adjust the parameters and/or structure of the classifier. There is little literature on the use of both generative and discriminative models for the quantity problem. In [4], the authors worked on a generative model and adopted a discriminative model to correct the bias of the generative classifier learned from small-size training samples. In this letter, we propose an ensemble algorithm that benefits from the advantages of both generative and discriminative models to deal with the quantity problem in the classification of hyperspectral remote sensing images. In particular, both labeled and unlabeled data are represented with a generative model [i.e., a mixture of Gaussians (MoG)]. Then, the estimated model is used for discriminative learning. This is motivated by the recently proposed discriminative classification approach, the support cluster machine (SCM) [5]. The SCM was originally used to address large-scale supervised learning problems. The main idea in the SCM is that the labeled data are at first modeled using a generative model.

Manuscript received December 22, 2008; revised April 10; first published July 28, 2009; current version published October 14. This work was supported in part by the Natural Science Foundation of China, by the Ph.D. Programs Foundation of the Ministry of Education of China, and by the Research Fund of the University of Iceland. M. Chi, Q. Kun, and R. Feng are with the School of Computer Science, Fudan University, Shanghai, China (mmchi@fudan.edu.cn; fengrui@fudan.edu.cn). J. A. Benediktsson is with the Faculty of Electrical and Computer Engineering, University of Iceland, 107 Reykjavik, Iceland (benedikt@hi.is). Color versions of one or more of the figures in this paper are available online.
Then, the kernel, i.e., the similarity measure between Gaussians, is defined by probability product kernels (PPKs) [6]. In other words, the obtained PPK kernel is used to train support vector machines (SVMs), where the learned models contain support clusters rather than support vectors (the name SCM is based on this). In the SCM, the number of clusters is important for obtaining the best classification results. If the selected number of Gaussians (the mixture is not limited to Gaussians) does not fit the data well, the classification accuracy can decrease. For a small-size training set, a mixture model estimated from only the labeled samples cannot represent the distribution of the whole data. To attack the aforementioned problem, it is proposed here to first use both labeled and unlabeled samples to estimate an MoG. Then, different sets of MoGs are generated by going from few (coarse representation) to many (fine representation) clusters. Finally, the output classification result is obtained by integrating, with an ensemble technique, the results of the individual SCMs learned from the different sets of MoGs. For each of the estimated MoGs, the corresponding PPK kernel matrix can be computed and used as the input to a standard SVM for training. The accuracy and reliability of the proposed algorithm have been evaluated on reflective optics system imaging spectrometer (ROSIS) hyperspectral remote sensing data collected over the University of Pavia, Italy. The results are promising when compared to state-of-the-art classifiers. The rest of this letter is organized as follows. The next section describes the proposed ensemble algorithm with generative/discriminative models. Section III describes the data used in the experiments and reports and discusses the results provided by the different algorithms. Finally, conclusions and discussion are given in Section IV.

II. ENSEMBLE ALGORITHM WITH GENERATIVE/DISCRIMINATIVE MODELS

Let the given labeled data set $X_l = \{(\mathbf{x}_i, y_i)\}_{i=1}^{n}$, $X_l \in \mathbb{R}^{D \times n}$, be made up of $n$ labeled samples in a $D$-dimensional input space. We work on a binary classification problem, i.e., $y_i = +1$ if $\mathbf{x}_i$ is labeled as the positive class and $y_i = -1$ otherwise. Let the unlabeled data set $X_u = \{\mathbf{x}_i\}_{i=n+1}^{n+m}$, $X_u \in \mathbb{R}^{D \times m}$, consist of $m$ unlabeled samples. To alleviate the quantity problem, we use a generative model to extract as much statistical information as possible from a large amount of unlabeled samples together with the small-size labeled set; namely, a large amount of unlabeled samples is used to better estimate the data distribution. In our framework, an SCM is used for discriminative learning based on the mixtures. However, it is difficult to evaluate the influence of the number of mixture components on the classification results [5]. Therefore, different numbers of components are modeled and used as inputs to the base classifiers, the SCMs. Finally, we propose to integrate the results in order to improve classification accuracy and stability. Note that in a multiclass case, classes are usually highly overlapped; after clustering, the proposed algorithm based on MoG estimation cannot work better than SVM estimation. However, the proposed approach can be used for absolute classification [7] in remote sensing applications.

A. Generative Model: MoG

To obtain information from unlabeled data, the corresponding statistical information is used in this letter. In detail, a large amount of unlabeled samples is used to better estimate the data distribution, e.g., using an MoG (not limited to Gaussians). Let us assume that the data set $X = \{X_l, X_u\}$ is drawn independently from an MoG model $\Theta$.
The log-likelihood function for the independent and identically distributed data can then be written as

$$l(X) = \ln p(X \mid \Theta) = \sum_{i=1}^{n+m} \ln \left\{ \sum_{k=1}^{K} \pi_k \, \mathcal{N}(\mathbf{x}_i \mid \theta_k) \right\}$$

where the MoG model contains $K$ components, i.e., $\Theta = \{\theta_k\} = \{(\mu_k, \Sigma_k)\}$, $k = 1, \ldots, K$; $\mu_k$ denotes the mean vector, $\Sigma_k$ the covariance matrix, and $\pi_k$ the mixing coefficient of the $k$th component. In this letter, the expectation-maximization (EM) algorithm [8] is adopted to estimate the parameters. Since the estimation of the mixture model does not take class labels into account, a deterministic label needs to be assigned to each component. If only a very small-size labeled set is available, some components may contain only unlabeled samples; in that case, we discard such components. Components containing samples with different labels are divided until no component contains samples with different labels. Accordingly, we obtain the inputs $\{\Theta, \mathbf{y}\}$ to an SCM for learning, where $\mathbf{y} \in \mathbb{R}^K$ is the label vector for all the components.

B. Discriminative Model: SCM

After obtaining the inputs from the MoG, the similarity measure between Gaussians is defined, and an SVM-like learning framework is adopted for discriminative learning. After that, the kernel between a Gaussian and a vector is also defined for prediction.

1) PPK With MoG: After the data are represented by the MoG, the similarity between the components can be calculated by the PPK [6] in the form

$$\kappa_{kl} = \kappa(\theta_k, \theta_l) = (\pi_k \pi_l)^{\rho} \int_{\mathbb{R}^D} \mathcal{N}^{\rho}(\mathbf{x} \mid \mu_k, \Sigma_k)\, \mathcal{N}^{\rho}(\mathbf{x} \mid \mu_l, \Sigma_l)\, d\mathbf{x}$$
$$= (\pi_k \pi_l)^{\rho}\, \rho^{-D/2} (2\pi)^{\frac{(1-2\rho)D}{2}} |\hat{\Sigma}|^{\frac{1}{2}} |\Sigma_k|^{-\frac{\rho}{2}} |\Sigma_l|^{-\frac{\rho}{2}} \exp\left(-\frac{\rho}{2}\left(\mu_k^{\top}\Sigma_k^{-1}\mu_k + \mu_l^{\top}\Sigma_l^{-1}\mu_l - \hat{\mu}^{\top}\hat{\Sigma}\hat{\mu}\right)\right) \quad (1)$$

where $\rho$ is a constant, $\hat{\Sigma} = (\Sigma_k^{-1} + \Sigma_l^{-1})^{-1}$, and $\hat{\mu} = \Sigma_k^{-1}\mu_k + \Sigma_l^{-1}\mu_l$. To reduce the computational cost, it is assumed that the features are statistically independent. Hence, a diagonal covariance matrix is used, i.e., $\Sigma_k = \mathrm{diag}\left((\sigma_k^{(1)})^2, \ldots, (\sigma_k^{(D)})^2\right)$.

2) Discriminative Learning: After obtaining the kernel matrix $\mathbf{K} = (\kappa_{kl})_{k,l=1}^{K}$, we can use an SVM-like classifier for training, i.e., an SCM [5].
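The generative step of Section II-A can be illustrated with a small sketch. It fits a diagonal-covariance MoG by EM on pooled labeled and unlabeled samples, then assigns each component a class label from its labeled members (assumptions: NumPy, toy two-blob data, and hypothetical labeled sample indices; this is an illustration, not the authors' implementation):

```python
import numpy as np

def fit_diag_mog(X, K, n_iter=50):
    """Minimal EM for a mixture of K diagonal Gaussians (illustrative only)."""
    n, d = X.shape
    mu = X[np.linspace(0, n - 1, K).astype(int)].copy()  # spread initial means over the data
    var = np.tile(X.var(axis=0) + 1e-6, (K, 1))          # shared initial variances
    pi = np.full(K, 1.0 / K)                             # mixing coefficients
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, var_k)
        logp = (np.log(pi)
                - 0.5 * (np.log(2 * np.pi * var)
                         + (X[:, None, :] - mu) ** 2 / var).sum(axis=2))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate pi, mu, var from the responsibilities
        nk = r.sum(axis=0) + 1e-12
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ X ** 2) / nk[:, None] - mu ** 2 + 1e-6
    return pi, mu, var, r

# toy data: two separated blobs; only the first and last samples are "labeled"
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3.0, 0.5, (50, 2)), rng.normal(3.0, 0.5, (50, 2))])
pi, mu, var, r = fit_diag_mog(X, K=2)
comp = r.argmax(axis=1)                      # hard component assignment per sample

labeled = {0: +1, 99: -1}                    # hypothetical labeled indices and labels
comp_label = {}
for i, y in labeled.items():                 # majority label per component
    comp_label[comp[i]] = comp_label.get(comp[i], 0) + y
comp_label = {k: (1 if v >= 0 else -1) for k, v in comp_label.items()}
```

The resulting `(pi, mu, var)` triples play the role of $\Theta$, and `comp_label` the role of $\mathbf{y}$, in the SCM inputs $\{\Theta, \mathbf{y}\}$.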
Here, the SCM maximizes the margin between the positive and negative clusters, rather than between data vectors, i.e.,

$$\min_{\mathbf{w}, b, \boldsymbol{\xi}} \; \frac{1}{2}\mathbf{w}^{\top}\mathbf{w} + C \sum_{k=1}^{K} \pi_k \xi_k \quad (2)$$

with the constraints

$$y_k\left(\mathbf{w}^{\top}\phi(\theta_k) + b\right) \geq 1 - \xi_k, \quad k = 1, \ldots, K \quad (3)$$

where $\phi(\cdot)$ is a mapping function (which, in our case, is a generative distribution of Gaussian form) and the slack $\xi_k$ is multiplied by the weight $\pi_k$ (the prior of the $k$th cluster in the MoG) such that a misclassified cluster with more samples is given a heavier penalty [5]. Incorporating the constraints in (3) and $\xi_k \geq 0$, $k = 1, \ldots, K$, into the cost function in (2) and using the Lagrangian theorem, the constrained optimization problem can be transformed into a dual problem following the same steps as for the SVM [9]. Thus, the dual representation of the SCM is given as

$$\max_{\boldsymbol{\alpha}} \; \sum_{k=1}^{K} \alpha_k - \frac{1}{2} \sum_{k=1}^{K} \sum_{l=1}^{K} y_k y_l \alpha_k \alpha_l \kappa(\theta_k, \theta_l)$$
$$\text{s.t.} \quad 0 \leq \alpha_k \leq \pi_k C, \quad k = 1, \ldots, K, \qquad \sum_{k=1}^{K} \alpha_k y_k = 0. \quad (4)$$

The SCM has the same optimization formulation as the SVM, except that in the SCM the Lagrange multipliers $\alpha_k$ are bounded by $C$ multiplied by the weight $\pi_k$.
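For $\rho = 1$, the PPK in (1) reduces to the closed form $\kappa(\theta_k, \theta_l) = \pi_k \pi_l \, \mathcal{N}(\mu_k \mid \mu_l, \Sigma_k + \Sigma_l)$. The sketch below (NumPy, diagonal covariances, made-up components) computes the $K \times K$ kernel matrix that would be handed to the solver of (4):

```python
import numpy as np

def ppk_matrix(pi, mu, var):
    """PPK between weighted diagonal Gaussians for rho = 1:
    kappa(k, l) = pi_k * pi_l * N(mu_k | mu_l, Sigma_k + Sigma_l)."""
    K = len(pi)
    kmat = np.empty((K, K))
    for k in range(K):
        for l in range(K):
            s = var[k] + var[l]                       # Sigma_k + Sigma_l (diagonals)
            logn = -0.5 * (np.log(2 * np.pi * s)
                           + (mu[k] - mu[l]) ** 2 / s).sum()
            kmat[k, l] = pi[k] * pi[l] * np.exp(logn)
    return kmat

# made-up mixture components for illustration: components 1 and 2 are close
pi = np.array([0.5, 0.3, 0.2])
mu = np.array([[-3.0, -3.0], [3.0, 3.0], [3.5, 2.5]])
var = np.full((3, 2), 0.25)
kmat = ppk_matrix(pi, mu, var)
```

A positive semidefinite matrix like `kmat` can then be passed to any SVM solver that accepts precomputed kernels; note that enforcing the per-cluster bound $\alpha_k \leq \pi_k C$ in (4) additionally requires a solver supporting per-sample penalty values.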

3) Prediction (Classification of Unlabeled Samples): A test sample $\mathbf{x}$ can be treated as an extreme case of a Gaussian $\theta_x$ whose covariance matrix vanishes, i.e., $\theta_x = (\pi_x = 1, \mu_x = \mathbf{x}, \Sigma_x = \sigma_x^2 \mathbf{I})$, $\sigma_x \to 0$. Given two Gaussians, $\theta_k$ and $\theta_x$, the kernel function (1) can be used to compute the similarity between a Gaussian and the test vector. Setting $\rho = 1$ and substituting $\theta_x = (1, \mathbf{x}, \sigma_x^2 \mathbf{I})$ into (1), we get the kernel value for the SCM prediction as

$$\kappa(\theta_k, \theta_x) = \pi_k \frac{1}{(2\pi)^{D/2} \sqrt{\det \Sigma_k}} \exp\left(-\sum_{d=1}^{D} \frac{\left(\mu_k^{(d)} - x^{(d)}\right)^2}{2\left(\sigma_k^{(d)}\right)^2}\right) = \pi_k \, \mathcal{N}(\mathbf{x} \mid \mu_k, \Sigma_k) \quad (5)$$

which is, up to normalization, the posterior probability of the component $\theta_k = (\mu_k, \Sigma_k)$ given $\mathbf{x}$. As in the SVM, the prediction function of the SCM is a linear combination of the kernels, here computed between the trained mixture components and the test pattern $\theta_x = \{1, \mathbf{x}, \sigma_x^2 \mathbf{I}\}$:

$$f(\mathbf{x}) = \sum_{k=1}^{K} \alpha_k y_k \kappa(\theta_k, \theta_x) + b. \quad (6)$$

Accordingly, a class label is assigned to a test pattern by

$$\hat{y} = \mathrm{sgn}\left(f(\mathbf{x})\right) = \begin{cases} +1, & \text{if } f(\mathbf{x}) \geq 0 \\ -1, & \text{otherwise.} \end{cases} \quad (7)$$

TABLE I
DISTRIBUTION OF ORIGINAL TRAINING AND TEST SAMPLES IN THE ROSIS UNIVERSITY DATA SET

The training phase of the proposed approach is summarized in Algorithm 1.

Algorithm 1 Training phase of the proposed algorithm
Require: $(\mathbf{x}_i)_{i=1}^{n+m}$, $(y_i)_{i=1}^{n}$, $G$, $C$, $\rho$
1: for $g = 1, \ldots, G$ do
2: Estimate the MoG model $\Theta^g$ from $(\mathbf{x}_i)_{i=1}^{n+m}$ with $K_g$ components.
3: Assign labels to the mixture components to obtain the inputs to the SCM, i.e., $\{\Theta^g, \mathbf{y}^g\}$.
4: Train the SCM with (4) to obtain $\boldsymbol{\alpha}^g$ and $b^g$.
5: end for
6: return $\{\Theta^g, \mathbf{y}^g, \boldsymbol{\alpha}^g, b^g\}$, $g = 1, \ldots, G$.

C. Ensemble Strategy

In the SCM, the data are represented by a mixture model whose number of components is usually fixed in advance. In real applications, it is difficult to evaluate which number is best for the problem. Here, it is proposed to use an ensemble technique to overcome this problem: the number of mixture components goes from coarse to fine to generate different sets of MoGs.
Accordingly, the input to the $g$th SCM is $\{\Theta^g, \mathbf{y}^g\}$, $g = 1, \ldots, G$, where $G$ is the number of classifiers. The prediction function of each classifier $f_g$ is the linear combination of the kernels computed between its trained mixture components and the test pattern $\theta_x$:

$$f_g(\mathbf{x}) = \sum_{k=1}^{K_g} \alpha_k^g y_k^g \kappa\left(\theta_k^g, \theta_x\right) + b^g. \quad (8)$$

Then, for the $g$th base classifier, a class label is assigned to the test pattern, i.e., $\mathbf{x} \mapsto \mathrm{sgn}(f_g(\mathbf{x}))$. Finally, the winner-takes-all combination strategy is used to make the final decision [10], i.e.,

$$\mathbf{x} \mapsto y_m \;\; \text{if} \;\; y_m = \arg\max_{c} N_c, \qquad \sum_{c} N_c = G \quad (9)$$

where $N_c$ is the accumulated number of base classifiers that assign the label $y_c$ to the test pattern.

III. EXPERIMENTAL RESULTS

A. Data Set Description

The data used in the experiments were collected by the optical sensor ROSIS 03 over the campus of the University of Pavia, Italy. Originally, the ROSIS 03 sensor provided 115 bands covering 0.43 to 0.86 μm. Some channels were removed due to noise, so the data contain 103 features. The task is to discriminate among nine classes, i.e., Asphalt, Meadows, Gravel, Trees, Metal sheets (Metal), Bare soil (Soil), Bitumen, Bricks, and Shadow. Some training data were removed due to zero features; hence, the full data set contains 3895 training samples, with the test samples distributed as shown in Table I. From Table I, one can see that the number of original training samples is quite balanced across classes. However, that is not the case for the test patterns: in particular, the number of test patterns for class 2 (Meadows) is far larger than for the remaining classes. This means that the data distribution estimated even from all of the labeled training samples, without prior information, cannot represent the distribution over the whole region. Therefore, in this letter, we mainly focus the classification on these unbalanced class pairs, e.g., class 2 versus class 4 and class 2 versus class 6.
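The prediction and combination steps of (5)-(9) can be sketched end to end with made-up trained models (NumPy; diagonal covariances; the multipliers, offsets, and component parameters below are hypothetical illustration values, not results from this letter):

```python
import numpy as np
from collections import Counter

def kappa_point(pi_k, mu_k, var_k, x):
    """Eq. (5): kernel between component theta_k and a test vector x,
    i.e., pi_k * N(x | mu_k, Sigma_k) for a diagonal Sigma_k."""
    logn = -0.5 * (np.log(2 * np.pi * var_k) + (x - mu_k) ** 2 / var_k).sum()
    return pi_k * np.exp(logn)

def scm_decide(model, x):
    """Eqs. (6)-(7): sign of the kernel expansion over support clusters."""
    alpha, y, pi, mu, var, b = model
    f = sum(alpha[k] * y[k] * kappa_point(pi[k], mu[k], var[k], x)
            for k in range(len(alpha))) + b
    return 1 if f >= 0 else -1

# hypothetical base models (G = 3), each with one positive and one negative cluster
base = (np.array([1.0, 1.0]),                   # alpha (made up)
        np.array([+1, -1]),                     # component labels
        np.array([0.5, 0.5]),                   # mixing coefficients
        np.array([[-3.0, -3.0], [3.0, 3.0]]),   # means
        np.full((2, 2), 0.25),                  # diagonal variances
        0.0)                                    # offset b
models = [base] * 3

# Eq. (9): winner-takes-all over the G base decisions
x = np.array([-2.8, -3.1])
votes = [scm_decide(m, x) for m in models]
label = Counter(votes).most_common(1)[0][0]
```

Here the test vector lies near the positive cluster, so every base classifier votes $+1$ and the winner-takes-all rule returns $+1$.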
To investigate the impact of the number of labeled samples on classifier performance, the original training data were subsampled to obtain ten splits, each made up of around 2% of the original labeled data (i.e., ten samples per class).
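The SVM baseline in Section III-B selects $(C, \sigma)$ by a grid search scored with fivefold cross-validation; the selection loop itself is generic and can be sketched as follows (the scorer below is a stand-in for the cross-validated accuracy of an SVM trained with those settings):

```python
import math
from itertools import product

def grid_search(score, Cs, sigmas):
    """Return the (C, sigma) pair with the highest validation score.
    `score` stands in for fivefold cross-validated accuracy."""
    return max(product(Cs, sigmas), key=lambda p: score(*p))

Cs = [10.0 ** e for e in range(-3, 4)]      # C = 10^-3, ..., 10^3
sigmas = [2.0 ** e for e in range(-3, 4)]   # sigma = 2^-3, ..., 2^3
assert len(Cs) * len(sigmas) == 49          # the 49 models mentioned in Section III-B

# toy scorer peaking at C = 10, sigma = 2 (illustration only)
best = grid_search(lambda C, s: -((math.log10(C) - 1) ** 2
                                  + (math.log2(s) - 1) ** 2),
                   Cs, sigmas)
```

In a real run, `score` would train an SVM with the given $(C, \sigma)$ on four folds and average the accuracy on the held-out fold.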

TABLE II
CLASSIFICATION ACCURACIES USING THE SVM, SVM^Light, AND THE PROPOSED ALGORITHM WITH THE TEN SUBSAMPLED TRAINING DATA SETS, I.E., CLASSES 2 VERSUS 4 AND CLASSES 2 VERSUS 6

B. Experimental Setup

For the SVM with a Gaussian kernel, the kernel parameter $\sigma$ must be chosen by model selection. In this letter, we used a grid search with $C = (10^{-3}, \ldots, 10^{3})$ and $\sigma = (2^{-3}, \ldots, 2^{3})$, and fivefold cross-validation was used to select the best model for prediction. In addition, we conducted experiments using a semisupervised SVM, i.e., SVM^Light, for comparison. In the SCM, the parameters $\sigma_k^{(d)}$ can be computed directly from the data. Moreover, the variance differs across directions, which makes the model better suited and more flexible for capturing the structure of the data, e.g., cigar-shaped data. Hence, only one parameter, the penalization parameter $C$, needs to be set in the SCM. In our experiments, it has been observed that the choice of $C$ does not significantly affect the results; therefore, we fix it at $C = 100$ in all the following experiments. The range of $K$ for the EM algorithm is set to $(2, 3, \ldots, 19)$ to construct 18 base classifiers. Note that, for the SVM and SVM^Light, 49 models need to be estimated for model selection. Therefore, the computational complexity of the SCM is of the same magnitude as that of the SVM.

C. Experimental Results

For ease of comparison, we also carried out experiments with a supervised SVM and a semisupervised SVM, i.e., SVM^Light, on the ten splits containing 20 labeled training samples each. The results are shown in Table II for each data set. For class 2 versus 6 (Meadows versus Bare Soil), the average accuracy of the SVM is only 63%, varying significantly across splits, and SVM^Light obtains a significant improvement.
However, the proposed approach obtains the best average classification accuracy, with an increase of 16.31%, from 63% to 79.31%, and with much more stable results across the individual splits. In particular, for Split 8, the proposed approach obtained a significantly better result than both the SVM and SVM^Light. This is possibly due to the better and much more representative statistics estimated from the large amount of unlabeled samples in the proposed approach. For class 2 versus 4 (Meadows versus Trees), the average classification accuracy of the SVM is 81.73% over the ten splits. Since the spectral characteristics of the classes Meadows and Trees are very similar, owing to similar spectral reflectance, we consider these classes more carefully. Looking more closely at class 2 versus 4, the average classification accuracy of the proposed algorithm is 88.47%, i.e., much higher than those of the SVM and SVM^Light. In particular, all the per-split results of the proposed approach are significantly better than those of the SVM and SVM^Light. Furthermore, the ensemble classification result per split is comparable to, or even better than, that obtained by the SVM using all the training samples (i.e., 89.11%). This confirms the effectiveness of the proposed ensemble classification algorithm, which increases not only the classification accuracies but also the robustness of the classification results. Fig. 1 shows the classification maps of the SVM and the proposed approach compared to the original test map for split 9 of the Meadows versus Trees data set. From Fig. 1, one can see that Meadows and Trees are more accurately classified by the proposed approach. The possible reason is that the data distribution can be better estimated through the use of a large amount of unlabeled samples.

Fig. 1. Comparison among the original test samples and the results provided by the SVM and the proposed approach. (a) Test map. (b) Map by the SVM. (c) Map by the proposed approach.
Therefore, the problem of a small-size training data set can be alleviated. Moreover, the ensemble strategy avoids model selection, which must be taken into account in most supervised classification algorithms.

IV. DISCUSSION AND CONCLUSIONS

In this letter, an ensemble classification algorithm with generative/discriminative models was proposed to classify hyperspectral remote sensing data. In the proposed approach, unlabeled samples, together with a very small number of labeled samples, are used to estimate generative models, i.e., MoGs. Since the number of components in an MoG is difficult to determine, the number of Gaussians is varied from coarse to fine in order to avoid this problem. Then, each MoG is used to define a base classifier for the discriminative model, i.e., the SCM [5]. The different generative models lead to a diversity of classification results across the base classifiers. Finally, the results of the different base classifiers are combined to obtain better and more robust classification. The experiments were carried out on real hyperspectral data collected by the ROSIS 03 sensor over an area around the University of Pavia, Italy.

The results obtained by the proposed ensemble classification approach showed both better classification accuracies and more robustness compared to state-of-the-art classifiers. In our future research, we will extend this work to multiclass problems. Furthermore, components of a mixture model without labeled information for learning will be investigated further.

ACKNOWLEDGMENT

The authors would like to thank Dr. P. Gamba of the University of Pavia, Italy, for providing the data set.

REFERENCES

[1] B. M. Shahshahani and D. A. Landgrebe, "The effect of unlabeled samples in reducing the small sample size problem and mitigating the Hughes phenomenon," IEEE Trans. Geosci. Remote Sens., vol. 32, no. 5.
[2] M. Chi and L. Bruzzone, "Semi-supervised classification of hyperspectral images by SVMs optimized in the primal," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 6, Jun. 2007.
[3] T. Bandos, D. Zhou, and G. Camps-Valls, "Semi-supervised hyperspectral image classification with graphs," in Proc. IEEE IGARSS, Denver, CO, Jul. 2006.
[4] F. Akinori, U. Naonori, and S. Kazumi, "Semisupervised learning for hybrid generative-discriminative classifier based on the maximum entropy principle," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 3.
[5] B. Li, M. Chi, J. Fan, and X. Xue, "Support cluster machine," in Proc. 24th Int. Conf. Mach. Learn., Corvallis, OR, Jun. 2007.
[6] T. Jebara, R. Kondor, and A. Howard, "Probability product kernels," J. Mach. Learn. Res., vol. 5.
[7] B. Jeon and D. Landgrebe, "A new supervised absolute classifier," in Proc. IEEE IGARSS, May 1990.
[8] A. Dempster, N. Laird, and D. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. R. Stat. Soc., Ser. B, vol. 39, no. 1, pp. 1-38.
[9] B. Schölkopf and A. Smola, Learning With Kernels. Cambridge, MA: MIT Press.
[10] G. Briem, J. A. Benediktsson, and J. R. Sveinsson, "Multiple classifiers in classification of multisource remote sensing data," IEEE Trans. Geosci. Remote Sens., vol. 40, no. 10.


More information

Quickest Search Over Multiple Sequences with Mixed Observations

Quickest Search Over Multiple Sequences with Mixed Observations Quicest Search Over Multiple Sequences with Mixed Observations Jun Geng Worcester Polytechnic Institute Email: geng@wpi.edu Weiyu Xu Univ. of Iowa Email: weiyu-xu@uiowa.edu Lifeng Lai Worcester Polytechnic

More information

Image retrieval based on bag of images

Image retrieval based on bag of images University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2009 Image retrieval based on bag of images Jun Zhang University of Wollongong

More information

DEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION

DEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION DEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION S.Dhanalakshmi #1 #PG Scholar, Department of Computer Science, Dr.Sivanthi Aditanar college of Engineering, Tiruchendur

More information

Efficient Tuning of SVM Hyperparameters Using Radius/Margin Bound and Iterative Algorithms

Efficient Tuning of SVM Hyperparameters Using Radius/Margin Bound and Iterative Algorithms IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 13, NO. 5, SEPTEMBER 2002 1225 Efficient Tuning of SVM Hyperparameters Using Radius/Margin Bound and Iterative Algorithms S. Sathiya Keerthi Abstract This paper

More information

Client Dependent GMM-SVM Models for Speaker Verification

Client Dependent GMM-SVM Models for Speaker Verification Client Dependent GMM-SVM Models for Speaker Verification Quan Le, Samy Bengio IDIAP, P.O. Box 592, CH-1920 Martigny, Switzerland {quan,bengio}@idiap.ch Abstract. Generative Gaussian Mixture Models (GMMs)

More information

Supplementary material: Strengthening the Effectiveness of Pedestrian Detection with Spatially Pooled Features

Supplementary material: Strengthening the Effectiveness of Pedestrian Detection with Spatially Pooled Features Supplementary material: Strengthening the Effectiveness of Pedestrian Detection with Spatially Pooled Features Sakrapee Paisitkriangkrai, Chunhua Shen, Anton van den Hengel The University of Adelaide,

More information

Application of Support Vector Machine Algorithm in Spam Filtering

Application of Support Vector Machine Algorithm in  Spam Filtering Application of Support Vector Machine Algorithm in E-Mail Spam Filtering Julia Bluszcz, Daria Fitisova, Alexander Hamann, Alexey Trifonov, Advisor: Patrick Jähnichen Abstract The problem of spam classification

More information

Table of Contents. Recognition of Facial Gestures... 1 Attila Fazekas

Table of Contents. Recognition of Facial Gestures... 1 Attila Fazekas Table of Contents Recognition of Facial Gestures...................................... 1 Attila Fazekas II Recognition of Facial Gestures Attila Fazekas University of Debrecen, Institute of Informatics

More information

A Taxonomy of Semi-Supervised Learning Algorithms

A Taxonomy of Semi-Supervised Learning Algorithms A Taxonomy of Semi-Supervised Learning Algorithms Olivier Chapelle Max Planck Institute for Biological Cybernetics December 2005 Outline 1 Introduction 2 Generative models 3 Low density separation 4 Graph

More information

2070 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 46, NO. 7, JULY 2008

2070 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 46, NO. 7, JULY 2008 2070 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 46, NO. 7, JULY 2008 A Novel Approach to Unsupervised Change Detection Based on a Semisupervised SVM and a Similarity Measure Francesca Bovolo,

More information

Facial expression recognition using shape and texture information

Facial expression recognition using shape and texture information 1 Facial expression recognition using shape and texture information I. Kotsia 1 and I. Pitas 1 Aristotle University of Thessaloniki pitas@aiia.csd.auth.gr Department of Informatics Box 451 54124 Thessaloniki,

More information

Clustering Lecture 5: Mixture Model

Clustering Lecture 5: Mixture Model Clustering Lecture 5: Mixture Model Jing Gao SUNY Buffalo 1 Outline Basics Motivation, definition, evaluation Methods Partitional Hierarchical Density-based Mixture model Spectral methods Advanced topics

More information

Support Vector Machines

Support Vector Machines Support Vector Machines . Importance of SVM SVM is a discriminative method that brings together:. computational learning theory. previously known methods in linear discriminant functions 3. optimization

More information

A NEW MULTIPLE CLASSIFIER SYSTEM FOR SEMI-SUPERVISED ANALYSIS OF HYPERSPECTRAL IMAGES

A NEW MULTIPLE CLASSIFIER SYSTEM FOR SEMI-SUPERVISED ANALYSIS OF HYPERSPECTRAL IMAGES A NEW MULTIPLE CLASSIFIER SYSTEM FOR SEMI-SUPERVISED ANALYSIS OF HYPERSPECTRAL IMAGES Jun Li 1, Prashanth Reddy Marpu 2, Antonio Plaza 1, Jose Manuel Bioucas Dias 3 and Jon Atli Benediktsson 2 1 Hyperspectral

More information

KERNEL-based methods, such as support vector machines

KERNEL-based methods, such as support vector machines 48 IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 12, NO. 1, JANUARY 2015 Kernel Collaborative Representation With Tikhonov Regularization for Hyperspectral Image Classification Wei Li, Member, IEEE,QianDu,Senior

More information

Mixture Models and EM

Mixture Models and EM Table of Content Chapter 9 Mixture Models and EM -means Clustering Gaussian Mixture Models (GMM) Expectation Maximiation (EM) for Mixture Parameter Estimation Introduction Mixture models allows Complex

More information

Kernel principal component analysis for the classification of hyperspectral remote sensing data over urban areas

Kernel principal component analysis for the classification of hyperspectral remote sensing data over urban areas Kernel principal component analysis for the classification of hyperspectral remote sensing data over urban areas Mathieu Fauvel, Jocelyn Chanussot and Jón Atli Benediktsson GIPSA-lab, Departement Image

More information

Image Segmentation Using Iterated Graph Cuts BasedonMulti-scaleSmoothing

Image Segmentation Using Iterated Graph Cuts BasedonMulti-scaleSmoothing Image Segmentation Using Iterated Graph Cuts BasedonMulti-scaleSmoothing Tomoyuki Nagahashi 1, Hironobu Fujiyoshi 1, and Takeo Kanade 2 1 Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai,

More information

Bagging for One-Class Learning

Bagging for One-Class Learning Bagging for One-Class Learning David Kamm December 13, 2008 1 Introduction Consider the following outlier detection problem: suppose you are given an unlabeled data set and make the assumptions that one

More information

IMAGE RESTORATION VIA EFFICIENT GAUSSIAN MIXTURE MODEL LEARNING

IMAGE RESTORATION VIA EFFICIENT GAUSSIAN MIXTURE MODEL LEARNING IMAGE RESTORATION VIA EFFICIENT GAUSSIAN MIXTURE MODEL LEARNING Jianzhou Feng Li Song Xiaog Huo Xiaokang Yang Wenjun Zhang Shanghai Digital Media Processing Transmission Key Lab, Shanghai Jiaotong University

More information

Mixture Models and the EM Algorithm

Mixture Models and the EM Algorithm Mixture Models and the EM Algorithm Padhraic Smyth, Department of Computer Science University of California, Irvine c 2017 1 Finite Mixture Models Say we have a data set D = {x 1,..., x N } where x i is

More information

IN THE context of risk management and hazard assessment

IN THE context of risk management and hazard assessment 606 IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 6, NO. 3, JULY 2009 Support Vector Reduction in SVM Algorithm for Abrupt Change Detection in Remote Sensing Tarek Habib, Member, IEEE, Jordi Inglada,

More information

EM algorithm with GMM and Naive Bayesian to Implement Missing Values

EM algorithm with GMM and Naive Bayesian to Implement Missing Values , pp.1-5 http://dx.doi.org/10.14257/astl.2014.46.01 EM algorithm with GMM and aive Bayesian to Implement Missing Values Xi-Yu Zhou 1, Joon S. Lim 2 1 I.T. College Gachon University Seongnam, South Korea,

More information

A Distance-Based Classifier Using Dissimilarity Based on Class Conditional Probability and Within-Class Variation. Kwanyong Lee 1 and Hyeyoung Park 2

A Distance-Based Classifier Using Dissimilarity Based on Class Conditional Probability and Within-Class Variation. Kwanyong Lee 1 and Hyeyoung Park 2 A Distance-Based Classifier Using Dissimilarity Based on Class Conditional Probability and Within-Class Variation Kwanyong Lee 1 and Hyeyoung Park 2 1. Department of Computer Science, Korea National Open

More information

Classification by Support Vector Machines

Classification by Support Vector Machines Classification by Support Vector Machines Florian Markowetz Max-Planck-Institute for Molecular Genetics Computational Molecular Biology Berlin Practical DNA Microarray Analysis 2003 1 Overview I II III

More information

Generative and discriminative classification techniques

Generative and discriminative classification techniques Generative and discriminative classification techniques Machine Learning and Category Representation 013-014 Jakob Verbeek, December 13+0, 013 Course website: http://lear.inrialpes.fr/~verbeek/mlcr.13.14

More information

10-701/15-781, Fall 2006, Final

10-701/15-781, Fall 2006, Final -7/-78, Fall 6, Final Dec, :pm-8:pm There are 9 questions in this exam ( pages including this cover sheet). If you need more room to work out your answer to a question, use the back of the page and clearly

More information

High-Resolution Image Classification Integrating Spectral-Spatial-Location Cues by Conditional Random Fields

High-Resolution Image Classification Integrating Spectral-Spatial-Location Cues by Conditional Random Fields IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 9, SEPTEMBER 2016 4033 High-Resolution Image Classification Integrating Spectral-Spatial-Location Cues by Conditional Random Fields Ji Zhao, Student

More information

Some questions of consensus building using co-association

Some questions of consensus building using co-association Some questions of consensus building using co-association VITALIY TAYANOV Polish-Japanese High School of Computer Technics Aleja Legionow, 4190, Bytom POLAND vtayanov@yahoo.com Abstract: In this paper

More information

Classification by Support Vector Machines

Classification by Support Vector Machines Classification by Support Vector Machines Florian Markowetz Max-Planck-Institute for Molecular Genetics Computational Molecular Biology Berlin Practical DNA Microarray Analysis 2003 1 Overview I II III

More information

All lecture slides will be available at CSC2515_Winter15.html

All lecture slides will be available at  CSC2515_Winter15.html CSC2515 Fall 2015 Introduc3on to Machine Learning Lecture 9: Support Vector Machines All lecture slides will be available at http://www.cs.toronto.edu/~urtasun/courses/csc2515/ CSC2515_Winter15.html Many

More information

Robust Event Boundary Detection in Sensor Networks A Mixture Model Based Approach

Robust Event Boundary Detection in Sensor Networks A Mixture Model Based Approach Robust Event Boundary Detection in Sensor Networks A Mixture Model Based Approach Min Ding Department of Computer Science The George Washington University Washington DC 20052, USA Email: minding@gwu.edu

More information

Binary Hierarchical Classifier for Hyperspectral Data Analysis

Binary Hierarchical Classifier for Hyperspectral Data Analysis Binary Hierarchical Classifier for Hyperspectral Data Analysis Hafrún Hauksdóttir A intruduction to articles written by Joydeep Gosh and Melba M. Crawford Binary Hierarchical Classifierfor Hyperspectral

More information

Feature scaling in support vector data description

Feature scaling in support vector data description Feature scaling in support vector data description P. Juszczak, D.M.J. Tax, R.P.W. Duin Pattern Recognition Group, Department of Applied Physics, Faculty of Applied Sciences, Delft University of Technology,

More information

Shared Kernel Models for Class Conditional Density Estimation

Shared Kernel Models for Class Conditional Density Estimation IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 12, NO. 5, SEPTEMBER 2001 987 Shared Kernel Models for Class Conditional Density Estimation Michalis K. Titsias and Aristidis C. Likas, Member, IEEE Abstract

More information

MULTI/HYPERSPECTRAL imagery has the potential to

MULTI/HYPERSPECTRAL imagery has the potential to IEEE GEOSCIENCE AND REMOTE SENSING ETTERS, VO. 11, NO. 12, DECEMBER 2014 2183 Three-Dimensional Wavelet Texture Feature Extraction and Classification for Multi/Hyperspectral Imagery Xian Guo, Xin Huang,

More information

Content-based image and video analysis. Machine learning

Content-based image and video analysis. Machine learning Content-based image and video analysis Machine learning for multimedia retrieval 04.05.2009 What is machine learning? Some problems are very hard to solve by writing a computer program by hand Almost all

More information

Segmentation: Clustering, Graph Cut and EM

Segmentation: Clustering, Graph Cut and EM Segmentation: Clustering, Graph Cut and EM Ying Wu Electrical Engineering and Computer Science Northwestern University, Evanston, IL 60208 yingwu@northwestern.edu http://www.eecs.northwestern.edu/~yingwu

More information

JPEG compression of monochrome 2D-barcode images using DCT coefficient distributions

JPEG compression of monochrome 2D-barcode images using DCT coefficient distributions Edith Cowan University Research Online ECU Publications Pre. JPEG compression of monochrome D-barcode images using DCT coefficient distributions Keng Teong Tan Hong Kong Baptist University Douglas Chai

More information

IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, VOL. 7, NO. 6, JUNE

IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, VOL. 7, NO. 6, JUNE IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, VOL. 7, NO. 6, JUNE 2014 2147 Automatic Framework for Spectral Spatial Classification Based on Supervised Feature Extraction

More information

Hyperspectral Image Classification by Using Pixel Spatial Correlation

Hyperspectral Image Classification by Using Pixel Spatial Correlation Hyperspectral Image Classification by Using Pixel Spatial Correlation Yue Gao and Tat-Seng Chua School of Computing, National University of Singapore, Singapore {gaoyue,chuats}@comp.nus.edu.sg Abstract.

More information

Support Vector Machines.

Support Vector Machines. Support Vector Machines srihari@buffalo.edu SVM Discussion Overview 1. Overview of SVMs 2. Margin Geometry 3. SVM Optimization 4. Overlapping Distributions 5. Relationship to Logistic Regression 6. Dealing

More information

Multiple Model Estimation : The EM Algorithm & Applications

Multiple Model Estimation : The EM Algorithm & Applications Multiple Model Estimation : The EM Algorithm & Applications Princeton University COS 429 Lecture Nov. 13, 2007 Harpreet S. Sawhney hsawhney@sarnoff.com Recapitulation Problem of motion estimation Parametric

More information

Normalized Texture Motifs and Their Application to Statistical Object Modeling

Normalized Texture Motifs and Their Application to Statistical Object Modeling Normalized Texture Motifs and Their Application to Statistical Obect Modeling S. D. Newsam B. S. Manunath Center for Applied Scientific Computing Electrical and Computer Engineering Lawrence Livermore

More information

An Optimized Pixel-Wise Weighting Approach For Patch-Based Image Denoising

An Optimized Pixel-Wise Weighting Approach For Patch-Based Image Denoising An Optimized Pixel-Wise Weighting Approach For Patch-Based Image Denoising Dr. B. R.VIKRAM M.E.,Ph.D.,MIEEE.,LMISTE, Principal of Vijay Rural Engineering College, NIZAMABAD ( Dt.) G. Chaitanya M.Tech,

More information

Dimensionality Reduction using Hybrid Support Vector Machine and Discriminant Independent Component Analysis for Hyperspectral Image

Dimensionality Reduction using Hybrid Support Vector Machine and Discriminant Independent Component Analysis for Hyperspectral Image Dimensionality Reduction using Hybrid Support Vector Machine and Discriminant Independent Component Analysis for Hyperspectral Image Murinto 1, Nur Rochmah Dyah PA 2 1,2 Department of Informatics Engineering

More information

Semi-Supervised Clustering with Partial Background Information

Semi-Supervised Clustering with Partial Background Information Semi-Supervised Clustering with Partial Background Information Jing Gao Pang-Ning Tan Haibin Cheng Abstract Incorporating background knowledge into unsupervised clustering algorithms has been the subject

More information

Contextual High-Resolution Image Classification by Markovian Data Fusion, Adaptive Texture Extraction, and Multiscale Segmentation

Contextual High-Resolution Image Classification by Markovian Data Fusion, Adaptive Texture Extraction, and Multiscale Segmentation IGARSS-2011 Vancouver, Canada, July 24-29, 29, 2011 Contextual High-Resolution Image Classification by Markovian Data Fusion, Adaptive Texture Extraction, and Multiscale Segmentation Gabriele Moser Sebastiano

More information

Lab 2: Support vector machines

Lab 2: Support vector machines Artificial neural networks, advanced course, 2D1433 Lab 2: Support vector machines Martin Rehn For the course given in 2006 All files referenced below may be found in the following directory: /info/annfk06/labs/lab2

More information

Speaker Diarization System Based on GMM and BIC

Speaker Diarization System Based on GMM and BIC Speaer Diarization System Based on GMM and BIC Tantan Liu 1, Xiaoxing Liu 1, Yonghong Yan 1 1 ThinIT Speech Lab, Institute of Acoustics, Chinese Academy of Sciences Beijing 100080 {tliu, xliu,yyan}@hccl.ioa.ac.cn

More information

Hyperspectral and Multispectral Image Fusion Using Local Spatial-Spectral Dictionary Pair

Hyperspectral and Multispectral Image Fusion Using Local Spatial-Spectral Dictionary Pair Hyperspectral and Multispectral Image Fusion Using Local Spatial-Spectral Dictionary Pair Yifan Zhang, Tuo Zhao, and Mingyi He School of Electronics and Information International Center for Information

More information

Automatic Shadow Removal by Illuminance in HSV Color Space

Automatic Shadow Removal by Illuminance in HSV Color Space Computer Science and Information Technology 3(3): 70-75, 2015 DOI: 10.13189/csit.2015.030303 http://www.hrpub.org Automatic Shadow Removal by Illuminance in HSV Color Space Wenbo Huang 1, KyoungYeon Kim

More information

Remote Sensed Image Classification based on Spatial and Spectral Features using SVM

Remote Sensed Image Classification based on Spatial and Spectral Features using SVM RESEARCH ARTICLE OPEN ACCESS Remote Sensed Image Classification based on Spatial and Spectral Features using SVM Mary Jasmine. E PG Scholar Department of Computer Science and Engineering, University College

More information

IJREAT International Journal of Research in Engineering & Advanced Technology, Volume 1, Issue 5, Oct-Nov, 2013 ISSN:

IJREAT International Journal of Research in Engineering & Advanced Technology, Volume 1, Issue 5, Oct-Nov, 2013 ISSN: Semi Automatic Annotation Exploitation Similarity of Pics in i Personal Photo Albums P. Subashree Kasi Thangam 1 and R. Rosy Angel 2 1 Assistant Professor, Department of Computer Science Engineering College,

More information

Kernel Combination Versus Classifier Combination

Kernel Combination Versus Classifier Combination Kernel Combination Versus Classifier Combination Wan-Jui Lee 1, Sergey Verzakov 2, and Robert P.W. Duin 2 1 EE Department, National Sun Yat-Sen University, Kaohsiung, Taiwan wrlee@water.ee.nsysu.edu.tw

More information

ONE of the fundamental problems in machine learning

ONE of the fundamental problems in machine learning 966 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 17, NO. 4, JULY 2006 An Incremental Training Method for the Probabilistic RBF Network Constantinos Constantinopoulos and Aristidis Likas, Senior Member, IEEE

More information

The Comparative Study of Machine Learning Algorithms in Text Data Classification*

The Comparative Study of Machine Learning Algorithms in Text Data Classification* The Comparative Study of Machine Learning Algorithms in Text Data Classification* Wang Xin School of Science, Beijing Information Science and Technology University Beijing, China Abstract Classification

More information

HYPERSPECTRAL remote sensing images (HSI) with

HYPERSPECTRAL remote sensing images (HSI) with 1 A Semi-supervised Spatial Spectral Regularized Manifold Local Scaling Cut With HGF for Dimensionality Reduction of Hyperspectral Images Ramanarayan Mohanty, Student Member, IEEE, S L Happy, Member, IEEE,

More information

Does Normalization Methods Play a Role for Hyperspectral Image Classification?

Does Normalization Methods Play a Role for Hyperspectral Image Classification? Does Normalization Methods Play a Role for Hyperspectral Image Classification? Faxian Cao 1, Zhijing Yang 1*, Jinchang Ren 2, Mengying Jiang 1, Wing-Kuen Ling 1 1 School of Information Engineering, Guangdong

More information

HYPERSPECTRAL sensors provide a rich source of

HYPERSPECTRAL sensors provide a rich source of Fast Hyperspectral Feature Reduction Using Piecewise Constant Function Approximations Are C. Jensen, Student member, IEEE and Anne Schistad Solberg, Member, IEEE Abstract The high number of spectral bands

More information

A Novel Model for Semantic Learning and Retrieval of Images

A Novel Model for Semantic Learning and Retrieval of Images A Novel Model for Semantic Learning and Retrieval of Images Zhixin Li, ZhiPing Shi 2, ZhengJun Tang, Weizhong Zhao 3 College of Computer Science and Information Technology, Guangxi Normal University, Guilin

More information

DECISION-TREE-BASED MULTICLASS SUPPORT VECTOR MACHINES. Fumitake Takahashi, Shigeo Abe

DECISION-TREE-BASED MULTICLASS SUPPORT VECTOR MACHINES. Fumitake Takahashi, Shigeo Abe DECISION-TREE-BASED MULTICLASS SUPPORT VECTOR MACHINES Fumitake Takahashi, Shigeo Abe Graduate School of Science and Technology, Kobe University, Kobe, Japan (E-mail: abe@eedept.kobe-u.ac.jp) ABSTRACT

More information

Hyperspectral Image Classification Using Gradient Local Auto-Correlations

Hyperspectral Image Classification Using Gradient Local Auto-Correlations Hyperspectral Image Classification Using Gradient Local Auto-Correlations Chen Chen 1, Junjun Jiang 2, Baochang Zhang 3, Wankou Yang 4, Jianzhong Guo 5 1. epartment of Electrical Engineering, University

More information

Voxel selection algorithms for fmri

Voxel selection algorithms for fmri Voxel selection algorithms for fmri Henryk Blasinski December 14, 2012 1 Introduction Functional Magnetic Resonance Imaging (fmri) is a technique to measure and image the Blood- Oxygen Level Dependent

More information

Unsupervised Learning and Clustering

Unsupervised Learning and Clustering Unsupervised Learning and Clustering Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Spring 2008 CS 551, Spring 2008 c 2008, Selim Aksoy (Bilkent University)

More information