Combining Neural Networks Based on Dempster-Shafer Theory for Classifying Data with Imperfect Labels
Mahdi Tabassian 1,2, Reza Ghaderi 1, and Reza Ebrahimpour 2,3

1 Faculty of Electrical and Computer Engineering, Babol University of Technology, Babol, Iran, r_ghaderi@nit.ac.ir
2 School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran, P.O. Box {tabasian,ebrahimpour}@ipm.ir
3 Department of Electrical and Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran

Abstract. This paper addresses supervised learning problems in which the class memberships of the training data are subject to uncertainty. The problem is tackled in the framework of the Dempster-Shafer theory. In order to properly estimate the class labels, different types of features are extracted from the data. The initial labels of the training data are ignored and, by utilizing the main classes' prototypes, each training pattern in each feature space is reassigned to one class or a subset of the main classes based on the level of ambiguity concerning its class label. Multilayer perceptron neural networks are used as base classifiers and, for a given test sample, their outputs are considered as basic belief assignments. Finally, the decisions of the base classifiers are combined using Dempster's rule of combination. Experiments with artificial and real data demonstrate that considering ambiguity in class labels can provide better results than classifiers trained with imperfect labels.

Keywords: Data with imperfect labels, Dempster-Shafer theory, Classifier combination, Neural network.

1 Introduction

In the classical supervised classification framework, a classifier is trained on a learning set of labeled patterns. However, in some applications, unambiguous label assignment may be difficult, imprecise or expensive.
Such situations can occur when differentiating between two or more classes is not easy, either because the information required for assigning certain labels to the data is lacking or because the data of the problem at hand are difficult to label. Combining the decisions of multiple classifiers induced from different information sources has proved to be a promising approach for improving the performance of a classification system that deals with imprecise and uncertain information. The Dempster-Shafer (D-S) theory of evidence [1] is a well-suited framework for the representation of partial knowledge. Compared to the Bayesian approach, it provides a more flexible mathematical tool for dealing with imperfect information. It offers various tools for combining several items of evidence and, as understood in the transferable belief model (TBM) [2], allows a decision about the class of a given pattern to be made by transforming a belief function into a probability function. Thanks to its flexibility in representing different kinds of knowledge, the D-S theory provides a suitable theoretical framework for combining classifiers, especially those learned using imprecise and/or uncertain data [3-7].

G. Sidorov et al. (Eds.): MICAI 2010, Part II, LNAI 6438, pp , Springer-Verlag Berlin Heidelberg 2010

In this paper we propose a new approach, within the belief functions and combining classifiers frameworks, for dealing with supervised classification problems in which the labels of the learning data are imperfect. Several representations of the data are used and an approach is suggested, based on the supervised information, for detecting inconsistencies in the labels of the learning data and assigning crisp and soft labels to them. Multilayer perceptron (MLP) neural networks are used as base classifiers and their outputs are interpreted as belief functions. The final decision about the class of a test pattern is made by combining the beliefs produced by the base classifiers using Dempster's rule of combination.

The paper is organized as follows. In Section 2, the basic concepts of the D-S theory are reviewed. Details of the proposed method are described in Section 3, followed by experimental results on artificial and real data in Section 4. Finally, Section 5 concludes the paper.

2 Dempster-Shafer Theory

The Dempster-Shafer (D-S) theory of evidence [1] is a theoretical framework for reasoning with uncertain and partial information. Several models for uncertain reasoning have been proposed based on the D-S theory; an example is the transferable belief model (TBM) proposed by Smets [2]. In this section, the main notions of the D-S theory are briefly reviewed.
Let Ω = {ω_1, ..., ω_M} be a finite set of mutually exclusive and exhaustive hypotheses called the frame of discernment. A basic belief assignment (BBA) is a function m: 2^Ω → [0, 1] verifying

Σ_{A ⊆ Ω} m(A) = 1.

A BBA m such that m(∅) = 0 is called normal. The subsets A of Ω with nonzero masses are called the focal elements of m, and m(A) indicates the degree of belief that is assigned to the exact set A and not to any of its subsets. The belief and plausibility functions associated with a BBA are defined, respectively, as:

Bel(A) = Σ_{B ⊆ A} m(B),    (1)

Pl(A) = Σ_{A ∩ B ≠ ∅} m(B).    (2)

Bel(A) represents the total amount of probability that must be allocated to A, while Pl(A) can be interpreted as the maximum amount of support that could be given to A. Let m_1 and m_2 be two BBAs induced by two independent items of evidence. These pieces of evidence can be combined using Dempster's rule of combination, which is defined as:
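As a concrete illustration (not part of the paper), Eqs. (1) and (2) can be evaluated directly on a mass function stored as a dictionary keyed by focal sets; the frame and the masses below are made-up examples:

```python
# Belief (Eq. 1) and plausibility (Eq. 2) of a BBA whose focal elements
# are stored as frozensets over the frame Omega = {w1, w2, w3}.

def bel(A, m):
    """Bel(A): total mass committed to subsets of A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(A, m):
    """Pl(A): total mass of focal elements intersecting A."""
    return sum(v for B, v in m.items() if B & A)

m = {frozenset({"w1"}): 0.5,
     frozenset({"w1", "w2"}): 0.3,
     frozenset({"w1", "w2", "w3"}): 0.2}   # masses sum to 1 (a normal BBA)

A = frozenset({"w1", "w2"})
print(bel(A, m))  # 0.8, since {w1} and {w1,w2} are subsets of A
print(pl(A, m))   # 1.0, since every focal element intersects A
```

Note that Bel(A) ≤ Pl(A) always holds; the gap between the two quantifies the ignorance about A.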
m(C) = [ Σ_{A ∩ B = C} m_1(A) m_2(B) ] / [ 1 − Σ_{A ∩ B = ∅} m_1(A) m_2(B) ],  C ≠ ∅.    (3)

Combining BBAs with Dempster's rule is possible only if the sources of belief are not totally contradictory, that is, only if there exist two subsets A ⊆ Ω and B ⊆ Ω with A ∩ B ≠ ∅ such that m_1(A) > 0 and m_2(B) > 0.

2.1 Discounting

When an information source S is used in the belief function framework, its reliability can be taken into account by the discounting operation, in which the original BBA m is weakened by a discounting rate α ∈ [0, 1]. The resulting BBA ^α m is defined by Shafer [1] and Smets [8] as:

^α m(A) = (1 − α) m(A),  A ⊂ Ω, A ≠ Ω,
^α m(Ω) = α + (1 − α) m(Ω).    (4)

The coefficient (1 − α) can be regarded as the degree of confidence one has in the source of information. If α = 1, S is not reliable and the information provided by this source is discarded. On the other hand, α = 0 indicates that S is fully reliable and the belief function remains unchanged.

2.2 Decision Making

The approach adopted in this paper for decision making is based on the pignistic transformation defined in the TBM. By uniformly distributing the mass of belief m(A) among its elements for all A ⊆ Ω, a pignistic probability distribution is defined as:

BetP(ω) = Σ_{A ⊆ Ω, ω ∈ A} m(A) / ( |A| (1 − m(∅)) ),  ω ∈ Ω,    (5)

where |A| is the cardinality of the subset A; for normal BBAs, m(∅) = 0 and the summand reduces to m(A)/|A|. Finally, a test sample is assigned to the class with the largest pignistic probability.

3 The Proposed Method

Fig. 1 shows the architecture of the proposed method. Its implementation involves two main phases: relabeling the learning data, and classifying an input test sample by combining the decisions of neural networks trained on the learning data with the new labels.
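The three D-S operations above, combination (Eq. 3), discounting (Eq. 4) and the pignistic decision (Eq. 5), can be sketched together on dictionary-based BBAs; the two-class toy frame and the masses are made-up, not taken from the paper:

```python
# Dempster's rule, discounting and pignistic decision on a toy frame.

OMEGA = frozenset({"w1", "w2"})

def combine(m1, m2):
    """Eq. 3: conjunctive combination followed by normalization."""
    joint, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                joint[C] = joint.get(C, 0.0) + a * b
            else:
                conflict += a * b           # mass falling on the empty set
    assert conflict < 1.0, "totally conflicting sources"
    return {C: v / (1.0 - conflict) for C, v in joint.items()}

def discount(m, alpha):
    """Eq. 4: transfer a fraction alpha of every mass to Omega."""
    out = {A: (1.0 - alpha) * v for A, v in m.items() if A != OMEGA}
    out[OMEGA] = alpha + (1.0 - alpha) * m.get(OMEGA, 0.0)
    return out

def betp(m):
    """Eq. 5 (normal BBA): spread m(A) uniformly over the elements of A."""
    p = dict.fromkeys(OMEGA, 0.0)
    for A, v in m.items():
        for w in A:
            p[w] += v / len(A)
    return p

m1 = {frozenset({"w1"}): 0.6, OMEGA: 0.4}   # a slightly unreliable source
m2 = {frozenset({"w2"}): 0.5, OMEGA: 0.5}
p = betp(combine(discount(m1, 0.2), m2))
print(max(p, key=p.get))  # "w2": after discounting, the first source loses the vote
```

Without the discounting step the same two sources would elect "w1"; weakening the first source by α = 0.2 is enough to flip the pignistic decision, which is exactly the role discounting plays in the method.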
Fig. 1. Architecture of the proposed classification scheme.

In the training phase, crisp and soft labels are assigned to the learning data and MLPs are trained on the data with the new labels. In the test phase, the outputs of the MLPs are converted into BBAs by means of a softmax operator. The final decision on a given test sample is made by combining the experts' beliefs with Dempster's rule of combination and using the pignistic probability.

3.1 Relabeling

Let Ω = {ω_1, ..., ω_M} be a set of M classes and X = [x_1, ..., x_n] a data point described by n features in the training set, associated with one class in Ω with certainty. The goals of this stage are (i) to detect inconsistencies in the labels of the training data using the supervised information carried by the data and (ii) to reassign each training sample to just one main class or to a subset of the main classes, based on the level of ambiguity concerning the class membership of that sample.

Let P be an M × n matrix containing the prototype vectors of the main classes and let D_i = [d_i1, ..., d_iM] be the set of distances between training sample X_i and the M prototypes according to some distance measure (e.g., the Euclidean one). The initial label of X_i is ignored and, by utilizing the information provided by the vector D_i, uncertainty detection and class reassignment for this sample are performed in a three-step procedure:

Step 1: The minimum distance between X_i and the class prototypes is taken from the vector D_i,

d_min = min_k (d_ik),  k = 1, ..., M.    (6)

Step 2: A value 0 < μ_k ≤ 1 is calculated for each of the M classes using the following function:

μ_k(X_i) = (d_min + β) / (d_ik + β),  k = 1, ..., M,    (7)

in which 0 < β < 1 is a small constant that ensures the function allocates a value greater than zero to each of the M classes even if d_min = 0. μ_k is a decreasing function of the difference between d_min and d_ik and takes values close to 1 for small differences.

Step 3: A threshold value 0 < τ < 1 is defined and, based on the level of ambiguity regarding the class membership of the training sample X_i, this sample may be assigned to (i) a set of classes, if the corresponding values of μ_k for these classes are greater than or equal to τ, or (ii) just one main class, the one with the closest prototype to X_i and μ_k = 1, if X_i is far away from the other main classes' prototypes.

Close distances between a training pattern and several of the class prototypes can be interpreted as an indication of ambiguity in the pattern's label, and in such cases a soft label is assigned to that pattern. The above procedure is repeated for all training data and for all feature spaces. Since several representations of the data are employed to provide complementary information, it can be expected that if a soft label has been assigned to a training sample in one of the feature spaces, this sample may belong to one main class or a subset of the main classes with less uncertainty in the other feature spaces. In this way, the negative effects of crisp but imperfect labels of some learning samples on the classification performance can be reduced.

3.2 Training and Classification

MLP neural networks are used as base classifiers. The learning set of each feature space, which consists of data with new crisp or soft labels, is employed to train a base classifier. Since different types of features are extracted from the data, each feature space has its own level of uncertainty in the class labels. As a result, after the relabeling procedure, the number and type of classes in each feature space may differ from those in the others, and base classifiers with different models (different numbers of output nodes) are trained on these feature spaces. In this way, diversity among the base classifiers is achieved.
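The three-step relabeling can be sketched as follows; the two-dimensional prototypes are hypothetical, and the Euclidean distance is used as suggested in the text:

```python
# Three-step relabeling of a training sample (Eqs. 6 and 7): the returned
# set is a crisp label (one class) or a soft label (several classes).

import math

def relabel(x, prototypes, tau, beta=0.01):
    d = {k: math.dist(x, p) for k, p in prototypes.items()}  # distances d_ik
    d_min = min(d.values())                                  # Step 1, Eq. 6
    mu = {k: (d_min + beta) / (d[k] + beta) for k in d}      # Step 2, Eq. 7
    return frozenset(k for k, v in mu.items() if v >= tau)   # Step 3

protos = {1: (0.0, 0.0), 2: (2.0, 0.0), 3: (1.0, 1.7)}      # made-up prototypes
print(relabel((0.1, 0.0), protos, tau=0.8))   # crisp: clearly class 1
print(relabel((1.0, 0.0), protos, tau=0.8))   # soft: ambiguous between 1 and 2
```

The class with the closest prototype always has μ_k = 1 ≥ τ, so the returned set is never empty; lowering τ makes soft labels more frequent.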
In the test phase, the same types of features as in the training stage are extracted from the test data and fed to the corresponding trained base classifiers. The output values of each base classifier can be interpreted as measures of confidence associated with different decisions. To combine the evidence induced by the feature spaces using Dempster's rule of combination, the decision of each base classifier must be converted into a BBA. This is done by normalizing the outputs of the base classifiers with a softmax operator:

m_i({ω_j}) = exp(O_ij) / Σ_{j'=1}^{C_i} exp(O_ij'),  j = 1, ..., C_i,    (8)

where O_ij is the jth output value of the ith base classifier, C_i is the number of classes in the ith feature space after the relabeling stage, and m_i({ω_j}) is the mass of belief given to class ω_j.
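A sketch of Eq. (8) with made-up output values; since after relabeling the "classes" of a feature space may themselves be subsets of Ω, each one is represented by a frozenset:

```python
# Softmax normalization of a base classifier's outputs into a BBA (Eq. 8).

import math

def outputs_to_bba(outputs, class_sets):
    """m_i({omega_j}) = exp(O_ij) / sum_j' exp(O_ij')."""
    z = [math.exp(o) for o in outputs]
    s = sum(z)
    return {A: v / s for A, v in zip(class_sets, z)}

classes = [frozenset({1}), frozenset({2}), frozenset({1, 2})]  # after relabeling
bba = outputs_to_bba([2.0, 0.5, 1.0], classes)
print(round(sum(bba.values()), 10))  # 1.0: a valid normalized BBA
```

The largest network output receives the largest mass, and a mass placed on a non-singleton set such as {ω_1, ω_2} expresses exactly the partial knowledge that soft labels were designed to capture.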
Note that, although our method allows a BBA m_i with any focal set over the set of the main classes to be computed, the BBA contains only the states that have corresponding classes in the ith feature space after the relabeling stage. Before combining the opinions of the different sources of evidence, the belief function of each classifier is weakened by its discounting rate (the evaluation of the discounting factor is explained in Section 3.2.1) and the resulting BBAs are merged using Dempster's rule of combination. The decision about the class of a given test sample is then made using the pignistic probability derived from the final BBA by the pignistic transformation. Note that, although the main contribution of our method is to classify data with imperfect labels, its application can be extended to classification problems involving heavily overlapping class distributions and nonlinear class boundaries.

3.2.1 Evaluation of the Discounting Factor

To assess the reliability of the information sources provided by the different feature spaces, the method proposed by Elouedi et al. [9] is used. This method is based on the TBM and finds the discounting factor by minimizing the distance between the pignistic probabilities computed from the discounted beliefs and the actual classes of the data. The outline of this approach is summarized below.

Let Ω = {ω_1, ..., ω_M} be a set of classes and χ = {X_1, ..., X_n} a set of n samples. Let the BBA m(X_j) be the normalized belief function for sample X_j, defined on the set of classes. The class of each sample is known, and c_j ∈ Ω denotes the class of sample X_j. Let BetP^α(X_j) denote the pignistic probability obtained from the discounted BBA ^α m(X_j).
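Eq. (11) below gives the minimizer of this distance in closed form; a sketch with hypothetical pignistic probabilities (an n × M list of lists) and true class indices might look like:

```python
# Closed-form discounting factor (Eq. 11), clipped to [0, 1].

def optimal_alpha(betp, labels):
    """betp[j][i] = BetP(X_j)(omega_i); labels[j] = index of class c_j."""
    n, M = len(betp), len(betp[0])
    num = sq = 0.0
    for j in range(n):
        for i in range(M):
            delta = 1.0 if labels[j] == i else 0.0   # indicator of Eq. 9
            num += (delta - betp[j][i]) * betp[j][i]
            sq += betp[j][i] ** 2
    return min(1.0, max(0.0, num / (n / M - sq)))

betp = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]   # made-up, fairly accurate source
print(optimal_alpha(betp, [0, 1, 0]))  # 0.0: accurate source, no discounting
print(optimal_alpha(betp, [1, 0, 1]))  # 1.0: systematically wrong, discarded
```

The two extreme cases behave as Section 2.1 requires: a source whose pignistic probabilities track the true labels gets α = 0 (fully reliable), while a systematically wrong source gets α = 1 and is effectively discarded.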
For each sample X_j, an indicator function δ_{j,i} is defined as:

δ_{j,i} = 1 if c_j = ω_i, and 0 otherwise,    (9)

and the sum of the squared Euclidean distances between the pignistic probabilities computed from the discounted BBAs and the indicator functions is calculated as:

TotalDist = Σ_{j=1}^{n} Σ_{i=1}^{M} ( BetP^α(X_j)(ω_i) − δ_{j,i} )².    (10)

The discounting factor α that minimizes the above distance is given by:

α = min( 1, max( 0, [ Σ_{j=1}^{n} Σ_{i=1}^{M} ( δ_{j,i} − BetP(X_j)(ω_i) ) BetP(X_j)(ω_i) ] / [ n/M − Σ_{j=1}^{n} Σ_{i=1}^{M} ( BetP(X_j)(ω_i) )² ] ) ).    (11)

4 Experimental Results

In this section, we report experimental results on artificial and real datasets to highlight the main aspects of the proposed method. In our experiments, the centers of the
main classes are taken by averaging the learning data of each class, and the resulting vectors are considered as the main classes' prototypes. A value of β = 0.01 was adopted in the relabeling stage, and validation sets were used to calculate the discounting factor.

4.1 Artificial Data

We used two-dimensional data so that the results could be easily represented and interpreted. The dataset was made of three classes with equal sample sizes, Gaussian distributions and a common identity covariance matrix; 150, 300 and 1500 samples were generated independently for the training, validation and testing sets, respectively. The center of each class was located at one of the vertices of an equilateral triangle. In order to use different representations of the data, the classes were transferred to their neighboring vertices in the clockwise direction, and in this fashion two other feature spaces were generated. In each feature space, a unique subset of each class overlapped with the data of the other classes. This means that high uncertainty pertaining to the class membership of a pattern in one of the feature spaces can be reduced, because this pattern may be located in non-overlapping or less ambiguous areas in the other feature spaces. To evaluate the performance of the proposed method in classification tasks with different levels of uncertainty in the class labels, the side length of the equilateral triangle (the distance between neighboring classes) was varied in {1, 2, 3}; in this way, three cases, from strongly overlapping to almost separated classes, were studied. Fig. 2 graphically represents the procedure explained above for generating the artificial data.

Fig. 2. Generic representation of the artificial data. For generating new feature spaces, the classes are transferred to their neighboring corners in the clockwise direction.
To demonstrate how different parts of a class overlap with other classes in different feature spaces, class 1 is divided into four parts and the positions of these parts in the three feature spaces are shown.

4.1.1 Soft and Crisp Label Generation

There are two main issues in the relabeling stage: the choice of the main classes' prototypes and the selection of the threshold value (τ). Although an appropriate selection of prototypes can play an important role in the accurate assignment of crisp or soft labels to the data, we focus our attention on the influence of considering uncertainty in the labels of the data on the performance of a classification method that confronts data with imperfect class labels or highly overlapping class distributions.
In order to examine how training samples with different levels of uncertainty in their labels are treated in the relabeling procedure, the result of partitioning the first training set with τ = 0.8 is shown in Fig. 3. Each partition is represented by a convex hull, and its class label indicates the subset of the main classes to which the samples of that partition were assigned. It can be seen that soft labels were assigned to samples situated on the boundaries of the main classes or located in ambiguous regions.

Fig. 3. Partitioning of the first learning set into crisp and soft subsets by the proposed relabeling approach with τ = 0.8. a) first feature space, b) second feature space, c) third feature space.

4.1.2 Performance Comparison

The performance of the proposed method was compared to that of three MLP neural networks trained separately, each on one of the feature spaces, and of ensemble networks constructed by merging the decisions of the single MLPs using three fixed combining methods (Averaging, Product and Max rules). The single MLPs and the ensemble networks discarded the possible uncertainties in the class labels and employed the data with their initial certain labels. The MLPs had one hidden layer and were trained with the Levenberg-Marquardt algorithm with default parameters and 80 epochs of training. Each neural network was trained 50 times with random initializations. To evaluate the performance of the proposed method for different threshold values, we generated 19 training sets from each original training set by varying τ from 0.05 to 0.95 with a step size of 0.05. The average test error rates of the employed classifiers on the three datasets, for different numbers of hidden nodes, are presented in Fig. 4. Note that the classification results of the proposed method with the best τ for each dataset are shown.
As can be seen, our method yields considerably better classification results than the other classifiers on the first dataset, the most difficult set with the highest level of ambiguity in the class labels (Fig. 4(a)). As shown in Fig. 4(b) and Fig. 4(c), the differences between the test performances of the proposed scheme and the three ensemble networks on the second and third datasets are small. It can therefore be concluded that there is not much benefit to be gained from considering uncertainty in perfectly labeled data or in a classification problem with well-separated classes.
Fig. 4. Classification error rates as a function of the number of hidden nodes for the single MLPs, the ensemble networks constructed by combining the MLPs with fixed combining methods, and the proposed method. a) first training set (τ = 0.8 for our method), b) second training set (τ = 0.75 for our method), c) third training set (τ = 0.7 for our method).

4.2 Real Data

We applied the proposed method to the problem of classifying circular knitted fabric defects. The data consisted of five classes of knitted defect samples and defect-free fabric images (Fig. 5).

Fig. 5. Samples of fabric images in our six-class problem. a) Vertical Strip, b) Horizontal Strip, c) Soil freckle, d) Crack, e) Hole, f) Defect-free.

As can be seen from Fig. 5, four classes contain either horizontal or vertical defects, which means that the features extracted from samples of these classes can create overlapping areas in feature space. Moreover, the small differences between samples of some defect classes and defect-free images may also cause vagueness in the information provided by the learning data.
The feature vectors were computed from gray-level fabric images in three stages: wavelet decomposition, binary thresholding and morphological processing. Three wavelet filters of the Daubechies family (db2, db5, db10) and three levels of decomposition were used, based on extensive experiments. The horizontal and vertical detail subimages at level 3 were used for further analysis, since most of the defective fabrics had either horizontal or vertical defects. The detail subimages were then converted to binary (the binarization level can vary from 0 to 1 and was chosen to be 0.2 in our work), and a morphological filter (opening) was applied to them to eliminate irrelevant parts that could otherwise be considered defective areas. White areas in the final subimages were considered defects, and the total numbers of white pixels in the final horizontal and vertical images formed the first and second dimensions of the feature vector, respectively. Fig. 6 illustrates the three two-dimensional feature spaces obtained using the above feature extraction approach.

Fig. 6. Representations of the fabric samples in the two-dimensional feature spaces obtained using the three wavelet filters. a) db2, b) db5, c) db10.

The dataset comprised 90 samples, with an equal number of patterns in all classes. Overall classification performance was evaluated using K-fold cross validation with K = 5. The data was divided into five subsets and, in each round, three of the five subsets were employed for training while the fourth and fifth were used for validation and testing, respectively. This procedure was repeated for all five subsets, and the average classification rate on the test patterns was taken as the figure of merit.
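The last two stages of the feature extraction pipeline can be sketched as follows; the wavelet decomposition is omitted, a plain list-of-lists stands in for a detail subimage, and the 3 × 3 structuring element is an assumption, since the paper does not specify one:

```python
# Binary thresholding at level 0.2, morphological opening (erosion then
# dilation with an assumed 3x3 structuring element) and white-pixel counting.

def binarize(img, level=0.2):
    return [[1 if v > level else 0 for v in row] for row in img]

def erode(img):
    h, w = len(img), len(img[0])
    return [[1 if all(0 <= y < h and 0 <= x < w and img[y][x]
                      for y in (i - 1, i, i + 1)
                      for x in (j - 1, j, j + 1)) else 0
             for j in range(w)] for i in range(h)]

def dilate(img):
    h, w = len(img), len(img[0])
    return [[1 if any(0 <= y < h and 0 <= x < w and img[y][x]
                      for y in (i - 1, i, i + 1)
                      for x in (j - 1, j, j + 1)) else 0
             for j in range(w)] for i in range(h)]

def defect_feature(subimage, level=0.2):
    """Number of white pixels after thresholding and opening."""
    opened = dilate(erode(binarize(subimage, level)))
    return sum(map(sum, opened))
```

On a test image containing a solid 3 × 3 bright block plus one isolated bright pixel, the opening removes the isolated pixel, so the feature counts only the block: isolated responses are exactly the "irrelevant parts" the morphological filter is meant to eliminate.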
As in the experiments carried out on the artificial dataset, we compared the performance of the proposed method on the real data with those of single and ensemble neural networks that relied on the initial imperfect labels. All MLPs were trained 100 times with random initial weights and 50 epochs of training, and their architectures were the same as in the former experiment. Table 1 gives the average test error rates of the proposed method (with the best threshold value, τ = 0.75), the ensemble networks and the single MLPs as a function of the number of hidden nodes.
Table 1. Average error rates (%) as a function of the number of hidden nodes for the single MLPs (First, Second, Third), the ensemble networks (Averaging, Product, Max) and the proposed method (τ = 0.75). The minimum error rate of each network is typed in bold.

The small differences between the error rates of the proposed method and the ensemble networks can be explained by the poor choice of the main classes' prototypes (the class centers) in the relabeling stage of our method. As mentioned earlier, our major interest is to investigate the effect of considering uncertainty in class labels on improving the classification results, and of utilizing the complementariness between information sources in the framework of the D-S theory. The expectation is therefore that, by employing a more sophisticated algorithm for generating the prototypes of the main classes, which remains an open area of research, better classification results would be achieved with the proposed method.

We used the real data to examine the effect of the discounting strategy on the performance of the proposed method and to explore how diverse the classifiers generated from the feature spaces by the proposed relabeling approach were. The error reduction rates obtained by employing the discounting factor in our method, for different threshold values and the best number of hidden neurons, are presented in Table 2. Overall, discounting yielded better performances than the situation in which all information sources were assumed to be fully reliable.

Table 2. Error reduction rates (%) achieved by employing the discounting factor in the proposed method as a function of the threshold value, for the best number of hidden nodes (9 hidden nodes).

Table 3 lists the classes generated from each feature space after the relabeling stage for the best threshold value (τ = 0.75).
Here, we randomly selected one of the training sets produced by the five-fold cross validation. It can be seen that the number and type of the generated soft class labels differ from one feature space to another, which indicates that the ensemble network was made up of a set of diverse classifiers. Apart from the influence of τ on the label generation procedure, the form of the produced class labels is related to the number of overlapping areas and to the amount of uncertainty pertaining to the samples of each area.
Table 3. Classes generated from the three feature spaces after the relabeling stage for the best threshold value (τ = 0.75)

Feature Set 1: ω_1, ω_2, ω_3, ω_4, ω_5, ω_6, ω_{2,4}, ω_{3,6}, ω_{1,3,6}, ω_{3,5,6}
Feature Set 2: ω_1, ω_2, ω_3, ω_4, ω_5, ω_6, ω_{2,6}, ω_{3,5}, ω_{3,6}
Feature Set 3: ω_1, ω_2, ω_3, ω_4, ω_5, ω_6, ω_{2,4}, ω_{2,6}, ω_{3,6}, ω_{2,3,6}, ω_{1,2,3,6}

5 Conclusion

In this paper, a method for handling imperfect labels using belief functions has been presented. By extracting different types of features from the data, the proposed method takes advantage of information redundancy and of the complementariness between sources. In each feature space, by means of the proposed relabeling technique, the initial labels of the learning data are ignored and each training pattern is then reassigned to a class with a crisp or soft label based on its closeness to the prototypes of the main classes. MLP neural networks are used as base classifiers and their outputs are interpreted as BBAs, thereby encoding partial knowledge about the class of a test pattern. The BBAs are then discounted based on the reliability of the base classifiers in identifying validation patterns, and are pooled using Dempster's rule of combination. Experiments were carried out on controlled simulated data and on a dataset of knitted fabric defects. It was shown that, by considering the ambiguity in the labels of the data, our method can outperform classifiers that rely on the initial imperfect labels.

References

1. Shafer, G.: A Mathematical Theory of Evidence. Princeton University Press, Princeton (1976)
2. Smets, P., Kennes, R.: The Transferable Belief Model. Artif. Intell. 66, (1994)
3. Rogova, G.: Combining the results of several neural network classifiers. Neural Networks 7(5), (1994)
4. Denoeux, T.: A k-Nearest Neighbor Classification Rule Based on Dempster-Shafer Theory. IEEE Trans. Syst., Man, Cybern. 25(3), (1995)
5.
Denoeux, T.: A Neural Network Classifier Based on Dempster-Shafer Theory. IEEE Trans. Syst., Man, Cybern. A, Syst., Humans 30, (2000)
6. Basir, O., Karray, F., Zhu, H.: Connectionist-Based Dempster-Shafer Evidential Reasoning for Data Fusion. IEEE Trans. Neural Netw. 16, (2005)
7. Quost, B., Denoeux, T., Masson, M.-H.: Pairwise classifier combination using belief functions. Pattern Recognition Letters 28, (2007)
8. Smets, P.: Belief Functions: The Disjunctive Rule of Combination and the Generalized Bayesian Theorem. International Journal of Approximate Reasoning 9, 1-35 (1993)
9. Elouedi, Z., Mellouli, K., Smets, P.: Assessing Sensor Reliability for Multisensor Data Fusion Within the Transferable Belief Model. IEEE Trans. Syst., Man, Cybern. B 34, (2004)
More informationComparison of supervised self-organizing maps using Euclidian or Mahalanobis distance in classification context
6 th. International Work Conference on Artificial and Natural Neural Networks (IWANN2001), Granada, June 13-15 2001 Comparison of supervised self-organizing maps using Euclidian or Mahalanobis distance
More informationFine Classification of Unconstrained Handwritten Persian/Arabic Numerals by Removing Confusion amongst Similar Classes
2009 10th International Conference on Document Analysis and Recognition Fine Classification of Unconstrained Handwritten Persian/Arabic Numerals by Removing Confusion amongst Similar Classes Alireza Alaei
More informationAnalytic Hierarchy Process using Belief Function Theory: Belief AHP Approach
Analytic Hierarchy Process using Belief Function Theory: Belief AHP Approach Elaborated by: Amel ENNACEUR Advisors: Zied ELOUEDI (ISG, Tunisia) Eric Lefevre (univ-artois, France) 2010/2011 Acknowledgements
More informationAPPLICATION OF DATA FUSION THEORY AND SUPPORT VECTOR MACHINE TO X-RAY CASTINGS INSPECTION
APPLICATION OF DATA FUSION THEORY AND SUPPORT VECTOR MACHINE TO X-RAY CASTINGS INSPECTION Ahmad Osman 1*, Valérie Kaftandjian 2, Ulf Hassler 1 1 Fraunhofer Development Center X-ray Technologies, A Cooperative
More informationCLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS
CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS CHAPTER 4 CLASSIFICATION WITH RADIAL BASIS AND PROBABILISTIC NEURAL NETWORKS 4.1 Introduction Optical character recognition is one of
More informationA New Parameterless Credal Method to Track-to-Track Assignment Problem
A New Parameterless Credal Method to Track-to-Track Assignment Problem Samir Hachour, François Delmotte, and David Mercier Univ. Lille Nord de France, UArtois, EA 3926 LGI2A, Béthune, France Abstract.
More informationEvidential relational clustering using medoids
Evidential relational clustering using medoids Kuang Zhou a,b, Arnaud Martin b, Quan Pan a, and Zhun-ga Liu a a. School of Automation, Northwestern Polytechnical University, Xi an, Shaanxi 70072, PR China.
More informationNeural Network based textural labeling of images in multimedia applications
Neural Network based textural labeling of images in multimedia applications S.A. Karkanis +, G.D. Magoulas +, and D.A. Karras ++ + University of Athens, Dept. of Informatics, Typa Build., Panepistimiopolis,
More informationEfficient Rule Set Generation using K-Map & Rough Set Theory (RST)
International Journal of Engineering & Technology Innovations, Vol. 2 Issue 3, May 2015 www..com 6 Efficient Rule Set Generation using K-Map & Rough Set Theory (RST) Durgesh Srivastava 1, Shalini Batra
More informationPattern Recognition. Kjell Elenius. Speech, Music and Hearing KTH. March 29, 2007 Speech recognition
Pattern Recognition Kjell Elenius Speech, Music and Hearing KTH March 29, 2007 Speech recognition 2007 1 Ch 4. Pattern Recognition 1(3) Bayes Decision Theory Minimum-Error-Rate Decision Rules Discriminant
More informationFuzzy Sets and Systems. Lecture 1 (Introduction) Bu- Ali Sina University Computer Engineering Dep. Spring 2010
Fuzzy Sets and Systems Lecture 1 (Introduction) Bu- Ali Sina University Computer Engineering Dep. Spring 2010 Fuzzy sets and system Introduction and syllabus References Grading Fuzzy sets and system Syllabus
More informationFabric Defect Detection Based on Computer Vision
Fabric Defect Detection Based on Computer Vision Jing Sun and Zhiyu Zhou College of Information and Electronics, Zhejiang Sci-Tech University, Hangzhou, China {jings531,zhouzhiyu1993}@163.com Abstract.
More informationA Decision-Theoretic Rough Set Model
A Decision-Theoretic Rough Set Model Yiyu Yao and Jingtao Yao Department of Computer Science University of Regina Regina, Saskatchewan, Canada S4S 0A2 {yyao,jtyao}@cs.uregina.ca Special Thanks to Professor
More informationImage Enhancement Using Fuzzy Morphology
Image Enhancement Using Fuzzy Morphology Dillip Ranjan Nayak, Assistant Professor, Department of CSE, GCEK Bhwanipatna, Odissa, India Ashutosh Bhoi, Lecturer, Department of CSE, GCEK Bhawanipatna, Odissa,
More informationTable of Contents. Recognition of Facial Gestures... 1 Attila Fazekas
Table of Contents Recognition of Facial Gestures...................................... 1 Attila Fazekas II Recognition of Facial Gestures Attila Fazekas University of Debrecen, Institute of Informatics
More informationAn evidential k-nearest neighbor classification method with weighted attributes
An evidential -nearest neighbor classification method with weighted attributes Lianmeng Jiao, Quan Pan, iaoxue Feng, Feng ang To cite this version: Lianmeng Jiao, Quan Pan, iaoxue Feng, Feng ang. An evidential
More informationROUGH MEMBERSHIP FUNCTIONS: A TOOL FOR REASONING WITH UNCERTAINTY
ALGEBRAIC METHODS IN LOGIC AND IN COMPUTER SCIENCE BANACH CENTER PUBLICATIONS, VOLUME 28 INSTITUTE OF MATHEMATICS POLISH ACADEMY OF SCIENCES WARSZAWA 1993 ROUGH MEMBERSHIP FUNCTIONS: A TOOL FOR REASONING
More informationMinimal Test Cost Feature Selection with Positive Region Constraint
Minimal Test Cost Feature Selection with Positive Region Constraint Jiabin Liu 1,2,FanMin 2,, Shujiao Liao 2, and William Zhu 2 1 Department of Computer Science, Sichuan University for Nationalities, Kangding
More informationarxiv: v2 [cs.lg] 11 Sep 2015
A DEEP analysis of the META-DES framework for dynamic selection of ensemble of classifiers Rafael M. O. Cruz a,, Robert Sabourin a, George D. C. Cavalcanti b a LIVIA, École de Technologie Supérieure, University
More informationNovel Paradigm for Constructing Masses in Dempster-Shafer Evidence Theory for Wireless Sensor Network s Multisource Data Fusion
Sensors 2014, 14, 7049-7065; doi:10.3390/s140407049 Article OPEN ACCESS sensors ISSN 1424-8220 www.mdpi.com/ournal/sensors Novel Paradigm for Constructing Masses in Dempster-Shafer Evidence Theory for
More informationInformation Fusion Dr. B. K. Panigrahi
Information Fusion By Dr. B. K. Panigrahi Asst. Professor Department of Electrical Engineering IIT Delhi, New Delhi-110016 01/12/2007 1 Introduction Classification OUTLINE K-fold cross Validation Feature
More information3.2 Level 1 Processing
SENSOR AND DATA FUSION ARCHITECTURES AND ALGORITHMS 57 3.2 Level 1 Processing Level 1 processing is the low-level processing that results in target state estimation and target discrimination. 9 The term
More informationInformation Granulation and Approximation in a Decision-theoretic Model of Rough Sets
Information Granulation and Approximation in a Decision-theoretic Model of Rough Sets Y.Y. Yao Department of Computer Science University of Regina Regina, Saskatchewan Canada S4S 0A2 E-mail: yyao@cs.uregina.ca
More informationA modified and fast Perceptron learning rule and its use for Tag Recommendations in Social Bookmarking Systems
A modified and fast Perceptron learning rule and its use for Tag Recommendations in Social Bookmarking Systems Anestis Gkanogiannis and Theodore Kalamboukis Department of Informatics Athens University
More informationSemi-Supervised Clustering with Partial Background Information
Semi-Supervised Clustering with Partial Background Information Jing Gao Pang-Ning Tan Haibin Cheng Abstract Incorporating background knowledge into unsupervised clustering algorithms has been the subject
More informationCHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS
CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS This chapter presents a computational model for perceptual organization. A figure-ground segregation network is proposed based on a novel boundary
More informationHidden Loop Recovery for Handwriting Recognition
Hidden Loop Recovery for Handwriting Recognition David Doermann Institute of Advanced Computer Studies, University of Maryland, College Park, USA E-mail: doermann@cfar.umd.edu Nathan Intrator School of
More informationREDUNDANCY OF MULTISET TOPOLOGICAL SPACES
Iranian Journal of Fuzzy Systems Vol. 14, No. 4, (2017) pp. 163-168 163 REDUNDANCY OF MULTISET TOPOLOGICAL SPACES A. GHAREEB Abstract. In this paper, we show the redundancies of multiset topological spaces.
More informationPost-processing the hybrid method for addressing uncertainty in risk assessments. Technical Note for the Journal of Environmental Engineering
Post-processing the hybrid method for addressing uncertainty in risk assessments By: Cédric Baudrit 1, Dominique Guyonnet 2, Didier Dubois 1 1 : Math. Spec., Université Paul Sabatier, 31063 Toulouse, France
More informationTraffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers
Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers A. Salhi, B. Minaoui, M. Fakir, H. Chakib, H. Grimech Faculty of science and Technology Sultan Moulay Slimane
More informationOn Generalizing Rough Set Theory
On Generalizing Rough Set Theory Y.Y. Yao Department of Computer Science, University of Regina Regina, Saskatchewan, Canada S4S 0A2 E-mail: yyao@cs.uregina.ca Abstract. This paper summarizes various formulations
More informationFusion for Evaluation of Image Classification in Uncertain Environments
Fusion for Evaluation of Image Classification in Uncertain Environments Arnaud Martin To cite this version: Arnaud Martin. Fusion for Evaluation of Image Classification in Uncertain Environments. Information
More informationCHAPTER 8 COMPOUND CHARACTER RECOGNITION USING VARIOUS MODELS
CHAPTER 8 COMPOUND CHARACTER RECOGNITION USING VARIOUS MODELS 8.1 Introduction The recognition systems developed so far were for simple characters comprising of consonants and vowels. But there is one
More informationCluster Analysis. Mu-Chun Su. Department of Computer Science and Information Engineering National Central University 2003/3/11 1
Cluster Analysis Mu-Chun Su Department of Computer Science and Information Engineering National Central University 2003/3/11 1 Introduction Cluster analysis is the formal study of algorithms and methods
More informationCombining Gabor Features: Summing vs.voting in Human Face Recognition *
Combining Gabor Features: Summing vs.voting in Human Face Recognition * Xiaoyan Mu and Mohamad H. Hassoun Department of Electrical and Computer Engineering Wayne State University Detroit, MI 4822 muxiaoyan@wayne.edu
More informationCHAPTER 4 FUZZY LOGIC, K-MEANS, FUZZY C-MEANS AND BAYESIAN METHODS
CHAPTER 4 FUZZY LOGIC, K-MEANS, FUZZY C-MEANS AND BAYESIAN METHODS 4.1. INTRODUCTION This chapter includes implementation and testing of the student s academic performance evaluation to achieve the objective(s)
More informationClassification with Diffuse or Incomplete Information
Classification with Diffuse or Incomplete Information AMAURY CABALLERO, KANG YEN Florida International University Abstract. In many different fields like finance, business, pattern recognition, communication
More informationUnit V. Neural Fuzzy System
Unit V Neural Fuzzy System 1 Fuzzy Set In the classical set, its characteristic function assigns a value of either 1 or 0 to each individual in the universal set, There by discriminating between members
More informationRank Measures for Ordering
Rank Measures for Ordering Jin Huang and Charles X. Ling Department of Computer Science The University of Western Ontario London, Ontario, Canada N6A 5B7 email: fjhuang33, clingg@csd.uwo.ca Abstract. Many
More information3 Feature Selection & Feature Extraction
3 Feature Selection & Feature Extraction Overview: 3.1 Introduction 3.2 Feature Extraction 3.3 Feature Selection 3.3.1 Max-Dependency, Max-Relevance, Min-Redundancy 3.3.2 Relevance Filter 3.3.3 Redundancy
More informationRough Set Approach to Unsupervised Neural Network based Pattern Classifier
Rough Set Approach to Unsupervised Neural based Pattern Classifier Ashwin Kothari, Member IAENG, Avinash Keskar, Shreesha Srinath, and Rakesh Chalsani Abstract Early Convergence, input feature space with
More informationAn Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting.
An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting. Mohammad Mahmudul Alam Mia, Shovasis Kumar Biswas, Monalisa Chowdhury Urmi, Abubakar
More informationTitle: Optimized multilayer perceptrons for molecular classification and diagnosis using genomic data
Supplementary material for Manuscript BIOINF-2005-1602 Title: Optimized multilayer perceptrons for molecular classification and diagnosis using genomic data Appendix A. Testing K-Nearest Neighbor and Support
More informationInstantaneously trained neural networks with complex inputs
Louisiana State University LSU Digital Commons LSU Master's Theses Graduate School 2003 Instantaneously trained neural networks with complex inputs Pritam Rajagopal Louisiana State University and Agricultural
More informationA Class of Instantaneously Trained Neural Networks
A Class of Instantaneously Trained Neural Networks Subhash Kak Department of Electrical & Computer Engineering, Louisiana State University, Baton Rouge, LA 70803-5901 May 7, 2002 Abstract This paper presents
More informationTHE EVIDENCE THEORY FOR COLOR SATELLITE IMAGE COMPRESSION
THE EVIDENCE THEORY FOR COLOR SATELLITE IMAGE COMPRESSION Khaled SAHNOUN and Noureddine BENABADJI Laboratory of Analysis and Application of Radiation (LAAR) Department of Physics, University of Sciences
More informationAdaptive Wavelet Image Denoising Based on the Entropy of Homogenus Regions
International Journal of Electrical and Electronic Science 206; 3(4): 9-25 http://www.aascit.org/journal/ijees ISSN: 2375-2998 Adaptive Wavelet Image Denoising Based on the Entropy of Homogenus Regions
More informationBoosting Algorithms for Parallel and Distributed Learning
Distributed and Parallel Databases, 11, 203 229, 2002 c 2002 Kluwer Academic Publishers. Manufactured in The Netherlands. Boosting Algorithms for Parallel and Distributed Learning ALEKSANDAR LAZAREVIC
More information6. Dicretization methods 6.1 The purpose of discretization
6. Dicretization methods 6.1 The purpose of discretization Often data are given in the form of continuous values. If their number is huge, model building for such data can be difficult. Moreover, many
More informationA Fast Personal Palm print Authentication based on 3D-Multi Wavelet Transformation
A Fast Personal Palm print Authentication based on 3D-Multi Wavelet Transformation * A. H. M. Al-Helali, * W. A. Mahmmoud, and * H. A. Ali * Al- Isra Private University Email: adnan_hadi@yahoo.com Abstract:
More informationGranular Computing based on Rough Sets, Quotient Space Theory, and Belief Functions
Granular Computing based on Rough Sets, Quotient Space Theory, and Belief Functions Yiyu (Y.Y.) Yao 1, Churn-Jung Liau 2, Ning Zhong 3 1 Department of Computer Science, University of Regina Regina, Saskatchewan,
More information4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used.
1 4.12 Generalization In back-propagation learning, as many training examples as possible are typically used. It is hoped that the network so designed generalizes well. A network generalizes well when
More informationNeural Networks. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani
Neural Networks CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Biological and artificial neural networks Feed-forward neural networks Single layer
More informationCOMBINING NEURAL NETWORKS FOR SKIN DETECTION
COMBINING NEURAL NETWORKS FOR SKIN DETECTION Chelsia Amy Doukim 1, Jamal Ahmad Dargham 1, Ali Chekima 1 and Sigeru Omatu 2 1 School of Engineering and Information Technology, Universiti Malaysia Sabah,
More informationPrototype Selection for Handwritten Connected Digits Classification
2009 0th International Conference on Document Analysis and Recognition Prototype Selection for Handwritten Connected Digits Classification Cristiano de Santana Pereira and George D. C. Cavalcanti 2 Federal
More informationColor-Based Classification of Natural Rock Images Using Classifier Combinations
Color-Based Classification of Natural Rock Images Using Classifier Combinations Leena Lepistö, Iivari Kunttu, and Ari Visa Tampere University of Technology, Institute of Signal Processing, P.O. Box 553,
More informationMINING ASSOCIATION RULES WITH UNCERTAIN ITEM RELATIONSHIPS
MINING ASSOCIATION RULES WITH UNCERTAIN ITEM RELATIONSHIPS Mei-Ling Shyu 1, Choochart Haruechaiyasak 1, Shu-Ching Chen, and Kamal Premaratne 1 1 Department of Electrical and Computer Engineering University
More informationDempster s Rule for Evidence Ordered in a Complete Directed Acyclic Graph
Dempster s Rule for Evidence Ordered in a Complete Directed Acyclic Graph Ulla Bergsten and Johan Schubert Division of Applied Mathematics and Scientific Data Processing, Department of Weapon Systems,
More informationGeneralized proportional conflict redistribution rule applied to Sonar imagery and Radar targets classification
Arnaud Martin 1, Christophe Osswald 2 1,2 ENSIETA E 3 I 2 Laboratory, EA 3876, 2, rue Francois Verny, 29806 Brest Cedex 9, France Generalized proportional conflict redistribution rule applied to Sonar
More informationOn the Max Coloring Problem
On the Max Coloring Problem Leah Epstein Asaf Levin May 22, 2010 Abstract We consider max coloring on hereditary graph classes. The problem is defined as follows. Given a graph G = (V, E) and positive
More informationMachine Learning Classifiers and Boosting
Machine Learning Classifiers and Boosting Reading Ch 18.6-18.12, 20.1-20.3.2 Outline Different types of learning problems Different types of learning algorithms Supervised learning Decision trees Naïve
More informationSupervised Learning. CS 586 Machine Learning. Prepared by Jugal Kalita. With help from Alpaydin s Introduction to Machine Learning, Chapter 2.
Supervised Learning CS 586 Machine Learning Prepared by Jugal Kalita With help from Alpaydin s Introduction to Machine Learning, Chapter 2. Topics What is classification? Hypothesis classes and learning
More informationA *69>H>N6 #DJGC6A DG C<>C::G>C<,8>:C8:H /DA 'D 2:6G, ()-"&"3 -"(' ( +-" " " % '.+ % ' -0(+$,
The structure is a very important aspect in neural network design, it is not only impossible to determine an optimal structure for a given problem, it is even impossible to prove that a given structure
More informationChapter 7 UNSUPERVISED LEARNING TECHNIQUES FOR MAMMOGRAM CLASSIFICATION
UNSUPERVISED LEARNING TECHNIQUES FOR MAMMOGRAM CLASSIFICATION Supervised and unsupervised learning are the two prominent machine learning algorithms used in pattern recognition and classification. In this
More informationEfficient Object Extraction Using Fuzzy Cardinality Based Thresholding and Hopfield Network
Efficient Object Extraction Using Fuzzy Cardinality Based Thresholding and Hopfield Network S. Bhattacharyya U. Maulik S. Bandyopadhyay Dept. of Information Technology Dept. of Comp. Sc. and Tech. Machine
More informationAutomatic Detection of Texture Defects using Texture-Periodicity and Gabor Wavelets
Abstract Automatic Detection of Texture Defects using Texture-Periodicity and Gabor Wavelets V Asha 1, N U Bhajantri, and P Nagabhushan 3 1 New Horizon College of Engineering, Bangalore, Karnataka, India
More information1) Give decision trees to represent the following Boolean functions:
1) Give decision trees to represent the following Boolean functions: 1) A B 2) A [B C] 3) A XOR B 4) [A B] [C Dl Answer: 1) A B 2) A [B C] 1 3) A XOR B = (A B) ( A B) 4) [A B] [C D] 2 2) Consider the following
More informationMultiple Classifier Fusion using k-nearest Localized Templates
Multiple Classifier Fusion using k-nearest Localized Templates Jun-Ki Min and Sung-Bae Cho Department of Computer Science, Yonsei University Biometrics Engineering Research Center 134 Shinchon-dong, Sudaemoon-ku,
More informationCS 5540 Spring 2013 Assignment 3, v1.0 Due: Apr. 24th 11:59PM
1 Introduction In this programming project, we are going to do a simple image segmentation task. Given a grayscale image with a bright object against a dark background and we are going to do a binary decision
More informationMulti-label classification using rule-based classifier systems
Multi-label classification using rule-based classifier systems Shabnam Nazmi (PhD candidate) Department of electrical and computer engineering North Carolina A&T state university Advisor: Dr. A. Homaifar
More informationINFORMATION RETRIEVAL SYSTEM USING FUZZY SET THEORY - THE BASIC CONCEPT
ABSTRACT INFORMATION RETRIEVAL SYSTEM USING FUZZY SET THEORY - THE BASIC CONCEPT BHASKAR KARN Assistant Professor Department of MIS Birla Institute of Technology Mesra, Ranchi The paper presents the basic
More informationInternational Journal of Scientific Research & Engineering Trends Volume 4, Issue 6, Nov-Dec-2018, ISSN (Online): X
Analysis about Classification Techniques on Categorical Data in Data Mining Assistant Professor P. Meena Department of Computer Science Adhiyaman Arts and Science College for Women Uthangarai, Krishnagiri,
More informationA Hierarchial Model for Visual Perception
A Hierarchial Model for Visual Perception Bolei Zhou 1 and Liqing Zhang 2 1 MOE-Microsoft Laboratory for Intelligent Computing and Intelligent Systems, and Department of Biomedical Engineering, Shanghai
More informationClimate Precipitation Prediction by Neural Network
Journal of Mathematics and System Science 5 (205) 207-23 doi: 0.7265/259-529/205.05.005 D DAVID PUBLISHING Juliana Aparecida Anochi, Haroldo Fraga de Campos Velho 2. Applied Computing Graduate Program,
More informationDeepest Neural Networks
Deepest Neural Networks arxiv:707.0267v [cs.ne] 9 Jul 207 Raúl Rojas Dahlem Center for Machine Learning and Robotics Freie Universität Berlin July 207 Abstract This paper shows that a long chain of perceptrons
More informationModel Assessment and Selection. Reference: The Elements of Statistical Learning, by T. Hastie, R. Tibshirani, J. Friedman, Springer
Model Assessment and Selection Reference: The Elements of Statistical Learning, by T. Hastie, R. Tibshirani, J. Friedman, Springer 1 Model Training data Testing data Model Testing error rate Training error
More informationObservational Learning with Modular Networks
Observational Learning with Modular Networks Hyunjung Shin, Hyoungjoo Lee and Sungzoon Cho {hjshin72, impatton, zoon}@snu.ac.kr Department of Industrial Engineering, Seoul National University, San56-1,
More informationA Novel Fuzzy Rough Granular Neural Network for Classification
International Journal of Computational Intelligence Systems, Vol. 4, No. 5 (September, 2011), 1042-1051 A Novel Fuzzy Rough Granular Neural Network for Classification Avatharam Ganivada, Sankar K. Pal
More informationCOSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor
COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality
More informationStructural and Syntactic Pattern Recognition
Structural and Syntactic Pattern Recognition Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Fall 2017 CS 551, Fall 2017 c 2017, Selim Aksoy (Bilkent
More informationVisual object classification by sparse convolutional neural networks
Visual object classification by sparse convolutional neural networks Alexander Gepperth 1 1- Ruhr-Universität Bochum - Institute for Neural Dynamics Universitätsstraße 150, 44801 Bochum - Germany Abstract.
More information