Fuzzy Partitioning Using Mathematical Morphology in a Learning Scheme

Christophe Marsala, Bernadette Bouchon-Meunier
LAFORIA-IBP, Université Pierre et Marie Curie, Case 169, 4 place Jussieu, 75252 Paris Cedex 05, France.

Abstract

In this paper, we present an algorithm to infer a fuzzy partition over a set of numerical values. This algorithm is based on mathematical morphology and is expressed in the theory of formal languages. It is usable when the numerical values are associated with a class. We use this algorithm during the construction of a fuzzy decision tree, when no fuzzy partition is available for a numerical attribute.

1. Background

We consider an inductive learning scheme. From a given training set, we want to find general laws which describe the data pertaining to this set. The training set is composed of examples, each represented by a description (a set of pairs [attribute, value]) and a class. Our goal is to assign a class to any new example for which we only know a description. This generalization is done by means of rules induced from the training set.

The problem has been extensively studied in the case of attributes with symbolic values ([11], [7], [12]). Problems occur when the set of values of a given attribute is imprecise or numerical. A solution is to use a fuzzy representation of these values ([3], [5], [13], [15]). However, this solution gives rise to the complex problem of generating such a fuzzy representation. A natural idea is to obtain it from experts of the studied domain, but it can be difficult to find experts or expertise for particular kinds of data. One possible means to obtain a fuzzy representation of the values of an attribute is to infer a fuzzy partition from the data of the training set. Various methods exist to infer a fuzzy partition from a set of data [1].

We consider the construction of a decision tree. The fuzzy partition could be inferred in an initial step, before the construction of the decision tree, and various methods exist to do this. However, we prefer an automatic method which infers a fuzzy partition at each step of the tree construction. Thus, the induced fuzzy partition is related to the training set of the current step of the decision tree construction. We do not want a highly elaborate method which covers the whole space of training examples, such as Krishnapuram's method [10], a neural network method, or a genetic algorithm method. These methods cluster the space covered by all the attributes of the data in one step. We prefer an algorithm which builds a fuzzy decision tree to cluster this space. Thus, we propose to infer a fuzzy partition in an automatic way, in an intermediate step of the construction of the decision tree. This method is easy to implement and gives good results. In our approach, we have to compare fuzzy partitions on several attributes. There is no problem even if the class depends on several numerical attributes: the final decision tree will take into account all the attributes involved in the recognition of a class and the dependencies between these attributes.

In this paper, we present a solution based on the use of mathematical morphology operators, formalized by means of tools from the theory of formal languages. The implementation of this solution is described in [4]. In Section 2, we recall the basics of mathematical morphology. In Section 3, we present rewriting systems that implement mathematical morphology operators to smooth a training set. In Section 4, we propose an algorithm to infer a fuzzy partition over a smoothed training set. Section 5 gives an example. In Section 6, we present the application of this method to the construction of decision trees in two practical cases. Finally, we conclude with new ideas that we plan to add to the method.
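For concreteness, the learning setting described above can be pictured with a minimal Python representation of a training set. The names Example and training_set below are purely illustrative and not part of the paper.

```python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class Example:
    """One training example: a description (attribute -> value) and a class label."""
    description: Dict[str, Any]
    label: str

# A training set is simply a list of examples; for a numerical attribute such as
# "size", each example carries a number that will later be sorted and encoded
# as a word (Section 3.1). The values and labels here are made up.
training_set: List[Example] = [
    Example({"size": 10}, "cheap"),
    Example({"size": 25}, "expensive"),
    Example({"size": 28}, "expensive"),
]
```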

2. Mathematical morphology operators

In this section, we present the fundamental operators of mathematical morphology. The theory of these operators can be found in [14] or [8].

2.1. The basic operators: erosion and dilatation

We consider a space of morphological bodies. The two basic operators, erosion and dilatation, enable us to modify a morphological body. This modification is related to a structuring element. The erosion is a particular subtraction of the structuring element from a body, and the dilatation is a particular addition of the structuring element to a body.

Figure 1. Mathematical morphology operators: erosion and dilatation.

2.2. The operators opening and closure

Each of these operators is a combination of the two basic operators. The opening is the combination of an erosion followed by a dilatation applied to a morphological body, with the same structuring element. It enables the destruction of small bodies in the space, with respect to the size of the chosen element.

Figure 2. Opening operator.

The closure is the combination of a dilatation followed by an erosion applied to a morphological body, with the same structuring element. It enables the destruction of small vacuum places occurring in a body, with respect to the size of the chosen element.

Figure 3. Closure operator.

2.3. The open-close filter

A filter is a combination of openings and closures. It is composed of n successive openings followed by n successive closures (n in IN), applied to all bodies of the space with the same structuring element. It thus enables the destruction of the small bodies present in the space with respect to the chosen element and, simultaneously, the filling of the small vacuum places occurring in bodies. The value of n enables us to control the strength of such a modification.

3. Smoothing a set of values

In this section, we introduce the representation of the training set as a word describing the distribution of the classes, according to the values of a given attribute which is supposed to take its values in an ordered set. We propose the use of rewriting systems ([9]), from formal language theory, to smooth this word in order to obtain fuzzy modalities of the attribute.

3.1. The training set as a word

We first transform the training set into a word, for a given attribute with values in an ordered set. Let C be an alphabet, each letter of C representing one of the classes. We construct the alphabet C' = C U {u}. The letter u is a particular letter in the system; we will use it to mark uncertain sequences. For any alphabet A, A* denotes the set of all possible words composed of letters from A.

For example, let the training set be {(v1, cheap), (v2, expensive), (v3, expensive), (v4, expensive), (v5, expensive)}, with {cheap, expensive} as the set of classes and {v1, v2, v3, v4, v5} (in increasing order) as the set of values of an attribute (e.g. the attribute size). Let C = {c, e}, c representing cheap and e representing expensive. The word defined in C* by this training set is ceeee. After this transformation of the training set into a word, we define various ways of using this word.
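The encoding of Section 3.1 amounts to sorting the examples by the value of the attribute and emitting one class letter per example. A minimal Python sketch is given below; the concrete numerical values are made up for illustration.

```python
from typing import List, Tuple

def training_set_to_word(examples: List[Tuple[float, str]],
                         letters: dict) -> str:
    """Sort the (value, class) pairs by value and emit one letter per class.

    `examples` holds the values of one numerical attribute with their class;
    `letters` maps each class name to a single letter of the class alphabet C.
    """
    return "".join(letters[label] for _, label in sorted(examples))

# Illustration in the spirit of Section 3.1 (the values are invented):
word = training_set_to_word(
    [(10, "cheap"), (23, "expensive"), (25, "expensive"),
     (33, "expensive"), (52, "expensive")],
    {"cheap": "c", "expensive": "e"})
print(word)  # -> "ceeee"
```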

3.2. Different operators to smooth a training set

We want to obtain sequences of letters of C that are as homogeneous as possible, in order to associate them with fuzzy subsets of the set of values (constituting a fuzzy partition of this set). Each subset will represent a linguistic modality of the attribute (for instance, small and big with the previous example). We present several techniques to alter a word. Our goal is to erase non-representative values within a word in order to smooth it. We use operators inspired by mathematical morphology. Each rewriting system is given as a transduction, a particular kind of automaton ([9]).

a. Transductions

Let us recall that a transduction is a 6-tuple (A, B, Q, I, T, delta) where A is the input alphabet, B is the output alphabet, Q is a (finite) set of states, I, a subset of Q, is the set of initial states of the transduction, T, a subset of Q, is the set of terminal states of the transduction, and delta, a subset of Q x A* x B* x Q, is the transition function. A transduction reads a word w in A* and rewrites it into a corresponding word w' in B*. It proceeds sequentially from the first letter of w to the last one, as a reading head moving letter by letter. The rewriting rules used to generate w' are based on delta. A tuple (q1, m, p, q2) in delta, with q1, q2 in Q, m in A* and p in B*, is called a transition of the transduction. If q1 is the current state and we can read m in w (i.e. m is composed of the successive letters coming just after the reading head), we replace it by p and the current state becomes q2. A convention is to use $ to match the end of the input word, and epsilon (the null word) is introduced when nothing has to be written. A simple visual representation of a transduction is a graph (Fig. 4).

For example, let A = {a, b}, B = {0, 1}, Q = {S1, S2, S3}, I = {S1}, T = {S3} and delta = {(S1, a, 0, S1), (S1, b, epsilon, S2), (S2, a, 0, S1), (S2, b, 1, S2), (S1, $, $, S3), (S2, $, $, S3)}.

Figure 4. Example of a transduction (initial state S1, terminal state S3; an arc labelled a / 0 means: read a and replace it by 0).

Let us rewrite w = abbaabaab with this transduction. After the sequence of states (S1, S1, S2, S2, S1, S1, S2, S1, S1, S2, S3), we obtain w' = 010000 and, as the current state S3 is a terminal state and there is nothing more to read in w, the rewriting is done.

b. A transduction for the erosion

Let us define E_x, a transduction with input and output alphabet C'. This rewriting system is used for the erosion of a word with respect to a particular letter x of C'. It corresponds to the reduction of the sequences of x in the word. From a word w in C'*, we obtain the word E_x(w) in C'*. For example, with the word w = xyyyy, to compute E_x(w) we use the transduction given in Fig. 5, and we obtain E_x(xyyyy) = uyyyy.

Figure 5. Transduction for the erosion with respect to x in C' (an arc labelled x / y means: read x and replace it by y).

c. A transduction for the dilatation

Now, let us define D_x, another rewriting system with input and output alphabet C' and x in C. This system dilates a sequence of x in a word when this sequence is surrounded by u letters (Fig. 6). It can be proven that, for any given word, the computed terminal word is unique [2]. Thus, we are sure that E_x(w) and D_x(w) exist for every word w in C'* and every letter x, and therefore for all training sets. Moreover, for any word w in C'*, we denote by E_x^k(w) (resp. D_x^k(w)), with k in IN, the word obtained from w after k consecutive erosions (resp. dilatations).

Figure 6. Transduction for the dilatation with respect to x in C'.
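The paper realizes E_x and D_x as transductions; the exact transitions of Figures 5 and 6 are not legible in this copy, so the sketch below only illustrates one plausible reading of their effect on words: under erosion, a letter x whose neighbourhood is not entirely made of x becomes the uncertain letter u, and under dilatation a u adjacent to x becomes x. This convention is an assumption, chosen so that the opening example of the next subsection is reproduced.

```python
U = "u"  # the special "uncertain" letter added to the class alphabet

def erosion(word: str, x: str) -> str:
    """Erode runs of `x`: an x whose left or right neighbour differs from x
    (or which sits at a word boundary) is replaced by u."""
    out = []
    for i, letter in enumerate(word):
        left = word[i - 1] if i > 0 else None
        right = word[i + 1] if i < len(word) - 1 else None
        if letter == x and (left != x or right != x):
            out.append(U)
        else:
            out.append(letter)
    return "".join(out)

def dilatation(word: str, x: str) -> str:
    """Dilate runs of `x`: a u whose left or right neighbour is x becomes x."""
    out = []
    for i, letter in enumerate(word):
        left = word[i - 1] if i > 0 else None
        right = word[i + 1] if i < len(word) - 1 else None
        if letter == U and (left == x or right == x):
            out.append(x)
        else:
            out.append(letter)
    return "".join(out)

print(erosion("xyyyy", "x"))      # -> "uyyyy"
print(dilatation("yyuxuy", "x"))  # -> "yyxxxy"
```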

With these two rewriting systems, we can now define the two usual operations of mathematical morphology: the opening and the closure.

d. The opening operator

The opening is the composition D_x o E_x of the two previous operators (i.e. rewriting systems). The k-opening (k in IN) of a word w in C'* with respect to x in C' is defined as O_x^k(w) = D_x^k(E_x^k(w)). The k-opening of a word erases the sequences of x whose length is smaller than 2k. The advantage of this operation is that we can erase all sequences of x in w with a length smaller than a fixed value. For example, with the word w = yyxxxyxyx, we have O_x^1(w) = yyxxxyuyu and O_x^2(w) = D_x^2(E_x^2(w)) = D_x^2(yyuuuyuyu) = yyuuuyuyu.

e. A closure operator

The closure is the composition E_x o D_x of the two operators of erosion and dilatation. This composition allows us to join sequences of letters in a word when these sequences are separated by fewer than two letters. The k-closure (k in IN) of the word w in C'* with respect to x is defined as Cl_x^k(w) = E_x^k(D_x^k(w)). With this operator, two sequences of x separated by fewer than 2k letters are unified.

Finally, we introduce the filter operator, which transforms a word into a sequence of homogeneous series of letters of C'. In the framework of the utilization of a training set, a filter allows us to smooth the training set in order to deduce a fuzzy partition.

f. A filter operator

A filter is a composition of the previously described word-transforming operators. Let w in C'*, x in C' and k in IN. The k-filter of the word w with respect to x is defined as:
Phi_x^1(w) = Cl_x^1(O_x^1(w)) if k = 1,
Phi_x^k(w) = Cl_x^k(O_x^k(Phi_x^{k-1}(w))) if k > 1.
This particular combination of operators has some interesting properties. A filter allows us to smooth a word: first, the sequences with a length smaller than 2k letters are erased (i.e., replaced by the letter u), then we unify the sequences separated by fewer than 2k letters. In the next section, we use all these operators to describe our main algorithm.

4. Fuzzy partitioning of a set of values

In this section, we present an algorithm to infer a fuzzy partition on a set of numerical values, after the use of the previous rewriting systems. When we apply a filter to the word induced by a training set, we are able to translate small sequences of classes into uncertain sequences. To smooth a training set, we apply a k-filter to it. We then obtain a word with large sequences (with a length larger than 2k). The value of k is fixed empirically and given to the system (we use a fixed percentage of the size of the training set in our application). The sequences of u represent uncertain sequences, where the classes are highly mixed. The sequences consisting of a single letter x, with x in C, describe, crudely speaking, a single class; these sequences do not contain any u character. We call them certain sequences, whatever x may be. We will use these two kinds of sequences to build a fuzzy partition of the set of values of the attribute.

Certain sequences of a letter x correspond to the kernels of the fuzzy sets of the partition. Let m be the number of fuzzy modalities we want for the attribute. We select the m largest certain sequences, each containing a single class (Fig. 7). To each selected sequence we associate the interval of values covered by its letters, for instance [v1min, v1max] and [v2min, v2max] when m = 2. In the case where we cannot find m such sequences, we can either reduce the number of applied filters, or select fewer sequences.
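Building on the erosion and dilatation sketches above (and on the same assumed rewriting convention), the composite operators can be written directly as compositions:

```python
def opening(word: str, x: str, k: int) -> str:
    """k-opening: k erosions then k dilatations; runs of x shorter than 2k vanish."""
    for _ in range(k):
        word = erosion(word, x)
    for _ in range(k):
        word = dilatation(word, x)
    return word

def closure(word: str, x: str, k: int) -> str:
    """k-closure: k dilatations then k erosions; small gaps between runs of x close."""
    for _ in range(k):
        word = dilatation(word, x)
    for _ in range(k):
        word = erosion(word, x)
    return word

def word_filter(word: str, x: str, k: int) -> str:
    """k-filter: the (k-1)-filtered word is opened then closed, as in Section 3.2.f."""
    if k > 1:
        word = word_filter(word, x, k - 1)
    return closure(opening(word, x, k), x, k)

print(opening("yyxxxyxyx", "x", 1))  # -> "yyxxxyuyu", as in the opening example
```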
We summarize this in the following algorithm FPMM (Fuzzy Partitioning using Mathematical Morphology), with m = 2:

Algorithm (FPMM). To fuzzify a training set with respect to an attribute defined on an ordered set of values V, into 2 fuzzy subsets:
1. Transform the training set into a word w.
2. For a fixed k, smooth w with a k-filter.
3. Find the two largest certain sequences s1 and s2 (s1 to the left of s2).
4. Denote by vimin (resp. vimax) the value of the attribute associated with the first (resp. last) letter of si in the training set, for i in {1, 2}; the corresponding intervals are [v1min, v1max] and [v2min, v2max], with v1max < v2min.
5. The fuzzy partition is defined as a family of two fuzzy subsets. The kernel of the first one is [min V, v1max] and its support is [min V, v2min[. The kernel of the second one is [v2min, max V] and its support is ]v1max, max V].
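A compact sketch of FPMM for m = 2 is given below, reusing U and word_filter from the sketches above. The way the filter is applied (once per class letter) and the kernel/support construction follow the trapezoidal reading of step 5; where the original notation is not fully legible, these choices are assumptions.

```python
from typing import List, Tuple

def certain_sequences(word: str) -> List[Tuple[int, int, str]]:
    """Return the maximal runs (start, end, letter) of a single class letter, skipping u."""
    runs, i = [], 0
    while i < len(word):
        j = i
        while j < len(word) and word[j] == word[i]:
            j += 1
        if word[i] != U:
            runs.append((i, j - 1, word[i]))
        i = j
    return runs

def fpmm(values: List[float], labels: List[str], letters: dict, k: int):
    """FPMM with m = 2: filter the class word, keep the two largest certain runs,
    and derive two trapezoidal fuzzy subsets (kernel, support) on the value axis."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    vs = [values[i] for i in order]
    word = "".join(letters[labels[i]] for i in order)
    for x in letters.values():                 # assumption: smooth w.r.t. each class letter
        word = word_filter(word, x, k)
    runs = sorted(certain_sequences(word), key=lambda r: r[1] - r[0], reverse=True)[:2]
    (a1, b1, _), (a2, b2, _) = sorted(runs)    # left run, then right run (needs two runs)
    v_min, v_max = vs[0], vs[-1]
    left = {"kernel": (v_min, vs[b1]), "support": (v_min, vs[a2])}
    right = {"kernel": (vs[a2], v_max), "support": (vs[b1], v_max)}
    return left, right
```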

Figure 7. Induced fuzzy partition: the two largest certain sequences define the kernels of the two fuzzy subsets.

Thus, we have an algorithm to induce a fuzzy partition from a training set. This algorithm smoothes the word induced by a training set and, from this smoothed word, we propose a way to define a fuzzy partition.

5. An illustration of the algorithm

Let the training set be composed of a numerical attribute (e.g. the age) and two classes + and -:
(5, +), (7, +), (8, +), (13, +), (14, -), (17, +), (20, -), (21, +), (22, +), (25, -), (29, -), (30, -), (35, -), (36, +), (38, -), (40, -).
We represent this training set in a graphical form (Fig. 8).

Figure 8. A training set.

To infer a fuzzy partition on the universe of the values of the age, we apply the algorithm FPMM. First, the training set is transformed into a word. Let C = {+, -}; the word associated with the training set is ++++-+-++----+--. We filter this word with the chosen value of k, and we obtain the filtered word (Fig. 9). Two certain sequences appear in the filtered word, a sequence of + and a sequence of -. We use them as the bases of the kernels of two fuzzy subsets (Fig. 10).

Figure 9. The filtered word: u + + u u u u u u u - - u u u u.

Figure 10. A fuzzy partition of the training set (universe: Age).

6. Application

We implemented the algorithm to infer fuzzy partitions for numerical attributes during the construction of a decision tree. The system is based on an extension of the ID3 algorithm with a fuzzy measure of entropy, the entropy-star [3], [13]. This measure is usable when a set of numerical values is associated with a fuzzy partition over it. Usually, this fuzzy partition is given by an expert of the considered domain. In a project on learning methods, we were supposed to build a fuzzy decision tree for Breiman's waveform problem [6]. With these data, no expert knowledge was available. Thus, we had to define the fuzzy partitions from the training set, and we used the algorithm FPMM. It appears that the results are more interesting with the fuzzy decision tree than with a traditional ID3-based method: fuzzy trees are shorter and generalize better to new cases. The fuzzy trees obtained are compact, both in their average number of paths and in their average number of nodes, and in generalization with these trees we obtain a 72.5% rate of correctly classified examples. A more detailed description of this application can be found in [4].

Other tests were conducted on the Iris data of Fisher (Fig. 11), available at the UCI repository (ftp://ftp.ics.uci.edu). With a cross-validation test, we obtain compact fuzzy trees and, in generalization, a 95.33% rate of correctly classified examples.

Figure 11. The root node of the decision tree constructed from the Iris training base (a test on the petal length around 3.0 cm).
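To make Section 5 concrete, the lines below run the sketches above on the age data. The value k = 1 is a guess (the value actually used in the paper is not legible here), so the filtered word and the resulting trapezoids may differ in detail from Figures 9 and 10.

```python
ages = [5, 7, 8, 13, 14, 17, 20, 21, 22, 25, 29, 30, 35, 36, 38, 40]
labels = ["+", "+", "+", "+", "-", "+", "-", "+", "+",
          "-", "-", "-", "-", "+", "-", "-"]

# Each class is its own one-letter symbol; k = 1 is an assumed filter strength.
left, right = fpmm(ages, labels, {"+": "+", "-": "-"}, k=1)
print("first modality :", left)    # kernel/support of the '+'-dominated subset
print("second modality:", right)   # kernel/support of the '-'-dominated subset
```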

7. Conclusion

In this paper, we propose a new algorithm to infer a fuzzy partition over the universe of a set of numerical values, when each of these values is associated with a class. This algorithm is based on several rewriting systems which are represented as transductions. Each of these rewriting systems is based on a mathematical morphology operator. We define an algorithm to reduce an arbitrary sequence of letters and an algorithm to enlarge a sequence of letters. When we compose these two algorithms, then, depending on the order of the composition, we obtain two general operators. These two operators allow us to control how small mixed sequences are erased. Finally, we obtain an algorithm to smooth a word induced by a training set, and, from this word, we propose a way to define a fuzzy partition. This method is implemented in our decision tree builder program to help the construction of decision trees when no fuzzy partition from an expert is available. In our future work, we propose to take into account another dimension of the set of values, namely the distance between two consecutive values, and to adapt fuzzy mathematical morphology to this kind of problem.

References

[1] N. Aladenise and B. Bouchon-Meunier. Acquisition de connaissances imparfaites : mise en évidence d'une fonction d'appartenance. Revue Internationale de Systémique, 1996.
[2] B. Bouchon-Meunier, C. Marsala, and M. Ramdani. Arbres de décision et théorie des sous-ensembles flous. In Actes des 5èmes journées du PRC-GDR d'Intelligence Artificielle, pages 50-53, 1995.
[3] B. Bouchon-Meunier, C. Marsala, and M. Ramdani. Inductive learning and fuzziness. Scientia Iranica, 1996.
[4] B. Bouchon-Meunier, C. Marsala, and M. Ramdani. Learning from uncertain and imprecise examples. In M. E. Cohen and D. L. Hudson, editors, Proceedings of the Int. Conf. on Computers and their Applications, pages 75-78, San Francisco, March 1996.
[5] B. Bouchon-Meunier, C. Marsala, and M. Ramdani. Learning from imperfect data. In D. Dubois, H. Prade, and R. R. Yager, editors, Fuzzy Sets Methods in Information Engineering: a Guided Tour of Applications. John Wiley and Sons, 1996, to appear.
[6] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification And Regression Trees. Chapman and Hall, New York, 1984.
[7] B. Cestnik, I. Kononenko, and I. Bratko. ASSISTANT 86: a knowledge elicitation tool for sophisticated users. In I. Bratko and N. Lavrac, editors, Progress in Machine Learning, Proceedings of EWSL, pages 31-45, 1987.
[8] M. Coster and J. L. Chermant. Précis d'analyse d'images. Presses du CNRS, 1989.
[9] S. Ginsburg. The Mathematical Theory of Context-Free Languages. McGraw-Hill, New York, 1966.
[10] R. Krishnapuram. Generation of membership functions via possibilistic clustering. In Proceedings of the 3rd IEEE Int. Conf. on Fuzzy Systems, volume 2, Orlando, Florida, June 1994.
[11] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81-106, 1986.
[12] J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA, 1993.
[13] M. Ramdani. Système d'induction formelle à base de connaissances imprécises. Thèse de l'Université P. et M. Curie, Rapport TH94/, LAFORIA-IBP, 1994.
[14] J. Serra. Image Analysis and Mathematical Morphology. Academic Press, New York, 1982.
[15] Y. Yuan and M. J. Shaw. Induction of fuzzy decision trees. Fuzzy Sets and Systems, 69:125-139, 1995.
