
Multi-Layer Incremental Induction

Xindong Wu and William H.W. Lo
School of Computer Science and Software Engineering
Monash University
900 Dandenong Road
Melbourne, VIC 3145, Australia

To appear in Proceedings of the 5th Pacific Rim International Conference on Artificial Intelligence, Singapore, November 1998.

Abstract. This paper describes a multi-layer incremental induction algorithm, MLII, which is linked to an existing nonincremental induction algorithm to learn incrementally from noisy data. MLII makes use of three operations: data partitioning, generalization and reduction. Generalization can either learn a set of rules from a (sub)set of examples, or refine a previous set of rules. The latter is achieved through a redescription operation called reduction: from a set of examples and a set of rules, we derive a new set of examples describing the behaviour of the rule set. New rules are extracted from these behavioral examples, and these rules can be seen as meta-rules, as they control previous rules in order to improve their predictive accuracy. Experimental results show that MLII achieves significant improvement in rule accuracy over HCV, the existing nonincremental algorithm used for the experiments in this paper.

1 Introduction

Existing machine learning algorithms can generally be divided into two categories [Langley 1996]: nonincremental algorithms, which process all training examples at once, and incremental algorithms, which handle training examples one by one. When an example set is not a static repository of data, for example when examples may be added, deleted, or changed over a span of time, learning from the example set cannot be a one-time process, so nonincremental learning has difficulty dealing with changing example populations. However, processing examples one by one, as existing incremental algorithms do, is a very tedious process when the example set is extraordinarily large. In addition, when some of the examples are noisy, the results learned from them must be revised at a later stage. As stated in [Schlimmer and Fisher 1986], incremental learning provides predictive results that depend on the particular order of the data presentation. This paper designs a new incremental learning algorithm, multi-layer induction, which divides an initial training set into subsets of approximately equal size, runs an existing induction algorithm on the first subset to obtain a first set of rules, and then processes the remaining data subsets one at a time, incorporating the induction results from the previous subset(s).

In this way, multi-layer induction accumulates discovered rules from each data subset at each layer and produces a final integrated output which represents the original data more accurately. Any noisy data contained in the original data set is partitioned across the small data subsets, so the effects of noise are diluted and induction efficiency can be increased. The existing algorithm used for the experiments in this paper is HCV (Version 2.0) [Wu 1995], a nonincremental rule induction system that in many cases performs better than other induction algorithms in terms of rule complexity and predictive accuracy.

2 MLII: Multi-Layer Incremental Induction

Multi-layer incremental induction (MLII) combines three learning operations, data partitioning, rule reduction and rule generalization, into an integrated process. Generalization and reduction work together with sequential incrementality in order to learn and refine rules incrementally. After data partitioning, MLII handles the example subsets sequentially through the generalization-reduction process. This sequential incrementality is particularly useful for huge amounts of data, in order to avoid exponential explosion.

2.1 Algorithm Outline

In the first step, the initial data set is partitioned into a number of data subsets of approximately equal size in a randomly shuffled way. In the second step, a set of rules is learned from the first subset of examples by a generalization algorithm. The only assumption we make here is that the generalization algorithm is able to produce deliberately under-optimal solutions (rules are redundant). This way, the learning problem is given an approximate rule set, and this rule set will be refined with the other data subsets. The third step performs the transition toward another learning problem, namely the refinement of the previous set of rules. This transition is performed by a redescription operator called reduction, which derives a new set of behavioral examples by examining the behavior of the rule set from Step 2 over a second data subset. From these behavioral examples, generalization can extract new rules, which are expected to correct defects and inconsistencies of the previous rules. A sequence of rule sets is thus gradually built. Successive applications of the above generalization-reduction process allow more accurate and more complex (because disjunctive) rules to be discovered, by sequentially handling the subsets of examples.

2.2 Data Partitioning

Data partitioning affects the quality of information in each data subset and in turn affects the performance of multi-layer induction. Our main design aim here is to dilute the noise in the original data set and to distribute the examples of different classes evenly. The partitioning process is designed as follows (a sketch in code is given at the end of this subsection).

1. Shuffle all examples in the training set randomly.
2. Put the examples of each class into a separate group.
3. Count the number of examples in each class group and compute the ratio between the class counts.
4. Randomly select examples from each class group according to the above ratio and put them into a subset.

This process is performed N times (where N is a parameter adjusted by the user). In some cases, the example ratio across the class groups does not divide evenly, and for the last subset some class groups may still have examples while others have none. In these cases, we do not form the last subset, but insert the remaining examples randomly into the existing subsets.
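The following is a minimal sketch of this partitioning step in Python. The representation of examples as (features, class) pairs, the function name, and the floor-quota handling of uneven ratios are our assumptions; the paper specifies only the four steps above.

```python
import random
from collections import defaultdict

def partition(examples, n_subsets, seed=None):
    """Stratified partitioning: split `examples`, a list of
    (features, class_label) pairs, into `n_subsets` subsets whose
    class proportions approximate those of the full training set."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)                       # Step 1: random shuffle

    groups = defaultdict(list)                  # Step 2: group by class
    for ex in shuffled:
        groups[ex[1]].append(ex)

    # Step 3: a per-subset quota for each class, from the class counts.
    quotas = {c: len(g) // n_subsets for c, g in groups.items()}

    subsets = []
    for _ in range(n_subsets):                  # Step 4: draw by quota
        subset = []
        for c, g in groups.items():
            subset.extend(g.pop() for _ in range(quotas[c]))
        subsets.append(subset)

    # Ratios rarely divide evenly: rather than form an unbalanced last
    # subset, scatter the leftovers randomly over the existing subsets,
    # as the paper prescribes.
    for g in groups.values():
        for ex in g:
            rng.choice(subsets).append(ex)
    return subsets
```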

2.3 Generalization

Generalization compresses the initial information. It involves observing a (sub)set of training examples of some particular concept, identifying the essential features common to the positive examples among them, and then formulating a concept definition based on these common features. The generalization process can thus be viewed as a search through a space of possible concept definitions for a correct definition of the concept to be learned. Because the space of possible concept definitions is vast, the heart of the generalization problem lies in utilizing whatever training data, assumptions and knowledge are available to constrain the search.

In MLII, discriminant generalization by elimination [Tim 1993] is adapted. A discriminant description specifies an expression (or a logical disjunction of such expressions) that distinguishes a given class from a fixed number of other classes. The minimal discriminant descriptions are the shortest such expressions (i.e., those with the minimum number of descriptors) distinguishing all objects in the given class from the objects of the other classes. Such descriptions specify the minimum information sufficient to identify the given class among a fixed number of other classes. These discriminant descriptions are converted into generalization rules. A generalization rule is a transformation of a description into a more general description that tautologically implies the initial description. Generalization rules are not truth-preserving but falsity-preserving: if an event falsifies some description, then it also falsifies any more general description. This is immediately seen by observing that $H \Rightarrow F$ is equivalent to $\neg F \Rightarrow \neg H$ (the law of contraposition).

Generalization by Elimination. Generalization by elimination rests on the concept of the star methodology [Michalski 1984]. Its main originality is a logical pruning of counter-examples, based on the near-miss notion [Kodratoff 1984].

Let s be an example in an example (sub)set A. Any counter-example t of A gives a constraint over the generalization of s: the descriptors which discriminate t from s cannot all be dropped simultaneously. The constraint C(s, t) is a subset of integers, given by

$$C(s,t) = \{\, i \mid \text{attribute } i \text{ discriminates } s \text{ and } t \,\}.$$

A counter-example $t'$ is a maximal near-miss to s in A if the constraint $C(s,t')$ is minimal for set inclusion among all C(s, t). We search all maximal near-miss counter-examples to find an integer set M that intersects every constraint C(s, t). From M, a rule $R_{s,M}$ is defined as follows: its premises are the conjunction of the conditions of s on the attributes in M. We prove that $R_{s,M}$ is a maximally discriminant generalization of s. By construction, for any counter-example t discriminated from s, there exists an element of C(s, t) which belongs to M; the corresponding attribute discriminates s and t, and its condition is kept from s to $R_{s,M}$, hence $R_{s,M}$ still discriminates t.

The search for M can be achieved by a graph exploration, which is exponential with respect to the number of constraints. However, it is enough for a set M to intersect all C(s, t) for t a maximal near-miss to s. Generalization by elimination therefore reduces the size of the exponential exploration by a preliminary (polynomial) pruning. A sketch of this procedure is given at the end of this subsection.

Predicate Calculus for Reduction. To the discriminant rules obtained above, we apply predicate calculus [Leung 1992] to generate more general rules. The following is a list of the formulae (where X, Y and Z each represent a conditional statement and $\neg$ represents complement (not)) used in our MLII system.

1. $\neg(\neg X) \equiv X$
2. $X \wedge Y \equiv Y \wedge X$ (the commutative law of conjunction)
3. $X \wedge (Y \wedge Z) \equiv (X \wedge Y) \wedge Z$ (the associative law of conjunction)
4. $X \vee X \equiv X$
5. $X \vee Y \equiv Y \vee X$ (the commutative law of disjunction)
6. $X \vee (Y \wedge Z) \equiv (X \vee Y) \wedge (X \vee Z)$ (the distributive law)
7. $X \wedge (Y \vee Z) \equiv (X \wedge Y) \vee (X \wedge Z)$ (the distributive law)
8. $\neg(X \vee Y) \equiv (\neg X) \wedge (\neg Y)$ (De Morgan's law)
9. $\neg(X \wedge Y) \equiv (\neg X) \vee (\neg Y)$ (De Morgan's law)

These laws are useful for combining different conditional rules by symbolic resolution in order to obtain generalized conditional rules (meta-rules).
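As one way to mechanize these laws, the following uses sympy's boolean algebra; sympy is our stand-in here, not what the paper's MLII system uses.

```python
from sympy import symbols
from sympy.logic.boolalg import to_dnf

# X, Y and Z stand for conditional statements, as in the list above.
X, Y, Z = symbols('X Y Z')

# Law 7 (distribution of conjunction over disjunction):
print(to_dnf(X & (Y | Z)))      # (X & Y) | (X & Z)

# Law 8 (De Morgan):
print(to_dnf(~(X | Y)))         # ~X & ~Y

# Merging two rules "X and Y -> c" and "X and Z -> c" into a single
# disjunctive meta-rule, with simplified premises:
print(to_dnf((X & Y) | (X & Z), simplify=True))
```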

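Returning to generalization by elimination, here is a minimal sketch under our own representation assumptions: examples are attribute tuples, and the greedy hitting-set step is our polynomial substitute for the graph exploration described above (it only approximates a minimal M). The function names are ours.

```python
def constraint(s, t):
    """C(s, t): the set of attribute indices that discriminate s from t."""
    return frozenset(i for i, (a, b) in enumerate(zip(s, t)) if a != b)

def maximal_near_miss_constraints(s, counter_examples):
    """Constraints of the maximal near-misses: those minimal under set
    inclusion.  A counter-example identical to s yields an empty
    constraint and cannot be discriminated, so it is skipped."""
    cs = {constraint(s, t) for t in counter_examples}
    cs.discard(frozenset())
    return [c for c in cs if not any(d < c for d in cs)]

def hitting_set(constraints):
    """Greedily build a set M intersecting every constraint: repeatedly
    pick the attribute that hits the most unmet constraints."""
    remaining, M = list(constraints), set()
    while remaining:
        counts = {}
        for c in remaining:
            for i in c:
                counts[i] = counts.get(i, 0) + 1
        best = max(counts, key=counts.get)
        M.add(best)
        remaining = [c for c in remaining if best not in c]
    return M

def generalize_by_elimination(s, counter_examples):
    """R_{s,M}: keep from s only the conditions on the attributes in M."""
    M = hitting_set(maximal_near_miss_constraints(s, counter_examples))
    return {i: s[i] for i in sorted(M)}  # premises as attribute -> value tests
```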
2.4 Reduction

Let $\Omega$ denote the description space of the learning domain, let B be the set of rules expressed within $\Omega$, and let L be the number of rules in B. For any rule in B, we say that an example in $\Omega$ fires the rule if the description of the example satisfies the premises of the rule.

Definition 1. Reduction, denoted by $\rho_B$, is the redescription operator defined as follows:

$$\rho_B : \Omega \to [0,1]^L, \qquad s \in \Omega \mapsto \rho_B(s) = [\, r_j(s),\ j = 1, \ldots, L \,]$$

where the reduced descriptor $r_j$ is given by

$$r_j(s) = \begin{cases} 1 & \text{if } s \text{ fires the } j\text{-th rule in } B,\\ 0 & \text{otherwise.} \end{cases}$$

The redescription transforms each example in $\Omega$ into an L-dimensional description. The class of the example does not change.

Definition 2. From a learning set A and a rule set B, the reduced learning set, denoted by $A_B$, is generated as follows:

$$A_B = [\, (\rho_B(s_i), \mathrm{Class}(s_i)) \mid (s_i, \mathrm{Class}(s_i)) \in A \,]$$

where $\mathrm{Class}(s_i)$ indicates the class of the example $s_i$.

The reduced learning set $A_B$ describes the behaviour of B on the examples in A. It is expressed in boolean logic, whatever the initial representations of A and B are. Generalization can be carried out on the reduced learning set to produce a refined set of rules. The refined set of rules is applied to a new subset of the original training examples to obtain a new learning set for further generalization, and so on (a sketch of this loop is given at the end of this section). The number of examples in the reduced learning set $A_B$ is generally less than the number of examples in the initial learning set A, but the reduced learning set must still contain enough information to enable a further generalization. So the number of examples in each data subset should not decrease too much.

2.5 Refinement of Previous Rules

At each learning layer, generalization on a reduced learning set $A_B$ refines the rule set B from the previous layer(s). First, if a rule in B has good predictive accuracy, this information is implicitly available in the reduced learning set $A_B$: the rule is often fired by the examples in A, so the corresponding descriptor in $A_B$ takes the value 1, and the class of these examples is often the same as the rule's class. Hence there is a correlation in $A_B$ between a value of this descriptor and a value of the class, and rules with good predictive accuracy will be rediscovered by the next generalization. This process is stable, as good rules in B are carried on. Second, the same argument ensures that irrelevant rules are dropped: if a rule is irrelevant, the associated reduced descriptor is irrelevant with respect to $A_B$ too. As generalization is supposed to detect and drop irrelevant descriptors, the rules learned from $A_B$ do not keep previously irrelevant rules. Third, generalization discovers links among descriptors and classes. In the reduced learning set $A_B$, examples are described according to the rules in B that they trigger. Hence the triggering of rules can itself be generalized from $A_B$: this generalization resolves conflicts arising among the previous rules.
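To make the reduction operator and the layered loop concrete, here is a minimal sketch under our own representation assumptions: a rule is a dict with a "premises" mapping of attribute index to required value, an example is an (attributes, class) pair, and `generalize` stands for any base learner (HCV in the paper) that returns deliberately redundant rules. We read the layering as cascading redescription, i.e., each new subset is redescribed through all previously learned layers; that reading, like the names, is ours.

```python
def fires(rule, s):
    """An example s fires a rule if it satisfies all of the rule's premises."""
    return all(s[i] == v for i, v in rule["premises"].items())

def reduce_set(A, B):
    """The reduced learning set A_B: each (example, class) pair in A is
    redescribed as the boolean firing vector over the L rules in B,
    with the class left unchanged."""
    return [([1 if fires(r, s) else 0 for r in B], cls) for s, cls in A]

def mlii(subsets, generalize):
    """Layer-by-layer loop: learn an approximate rule set on the first
    subset, then refine it on each remaining subset by reduction
    followed by re-generalization.  Rules at layer k+1 are meta-rules
    over the firings of layer-k rules."""
    layers = [generalize(subsets[0])]
    for A in subsets[1:]:
        for B in layers:                 # redescribe through the layers so far
            A = reduce_set(A, B)
        layers.append(generalize(A))     # meta-rules controlling the layer below
    return layers
```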

3 Experiments

In this section, we set up a number of experiments to compare the predictive accuracy of MLII rules with that of the HCV induction program [Wu 1995].

3.1 Experiments 1 to 4

Table 1 provides a summary of the data sets used in our first four experiments and of the results.

                            Experiment 1   Experiment 2   Experiment 3   Experiment 4
Database                    Person 1       Person 2       Labor-Neg 1    Labor-Neg 2
No. of training examples
Number of attributes
Number of classes
Missing values
Misclassifications
Level of noise              low            low            high           low
No. of test examples
No. of HCV rules
Accuracy of HCV rules       88.92%         77.33%         81.71%         87.77%
No. of MLII rules
Accuracy of MLII rules      98.88%         90.73%         92.57%         94.61%

Table 1. Summary of Experiments 1-4.

The four databases were all taken from the University of California at Irvine machine learning database repository [Murphy & Aha 95], and each contains a certain level of noise. These databases were selected because each of them, as created or collected by its original providers, consists of two standard components: a training set and a test set. The databases have been used "as is": the example ordering has not been changed, nor have examples been moved between the two sets. For each database, we ran each of HCV (Version 2.0) and MLII (with 4 layers) 10 times on the training set, and the accuracies listed in Table 1 are the average results on the test set. For all four databases, MLII (with 4 layers) performs better than HCV (Version 2.0), and the accuracy difference between MLII and HCV on each database is statistically significant. Therefore, we conclude that, with a carefully selected number of layers, MLII achieves a significant improvement over HCV in terms of rule accuracy.
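The paper does not name the significance test behind this claim; one plausible check over the 10 matched runs is a paired t-test, sketched below with placeholder accuracy arrays (the paper reports only the 10-run averages).

```python
from scipy import stats

# Per-run test-set accuracies for one database.  These values are
# hypothetical placeholders, not the paper's raw data.
hcv_acc  = [0.889, 0.885, 0.892, 0.887, 0.891,
            0.888, 0.890, 0.886, 0.893, 0.889]
mlii_acc = [0.988, 0.990, 0.987, 0.989, 0.988,
            0.991, 0.989, 0.987, 0.990, 0.989]

# Paired, because each MLII run shares its training conditions with
# the corresponding HCV run.
t, p = stats.ttest_rel(mlii_acc, hcv_acc)
print(f"t = {t:.2f}, p = {p:.4g}")   # a small p-value supports significance
```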

Layer No.   Rule Set   Test Set Accuracy
1           B                %
2           C1               %
            C2               %
3           D1               %
            D2               %
            D3               %
4           E1               %
            E2               %
            E3               %
            E4               %

Table 2. Results of Experiment 5.

3.2 Experiment 5

The purpose of this experiment is to examine how the accuracy of the MLII rules generated at each layer changes in an n-layer induction. The data set used is Labor-Neg 1, the same as in Experiment 3. Table 2 shows the results, and Figure 1 provides a visual illustration of the same results. From the graph in Figure 1, it is obvious that at the first layer, the accuracy of the induced rules on the same test data decreases as the number of layers in MLII increases: the highest is HCV induction (just one layer) and the lowest is 4-layer MLII. The reason is that HCV uses the whole set of 300 training examples to generate rules, while MLII uses only a subset (one n-th of the training set) at the first layer for HCV to generate an initial rule set. We have tried up to 4 layers with MLII, and the rule accuracy on the test set always increases up to the last layer.

A question arises here: what is the optimal number of layers for MLII on a given training set? Based on the various experiments we have carried out, it depends on the size and the noise level of the training set. If the training set is very large and contains a high level of noise, more layers of learning allow deeper rule refinement to dilute the noise. Conversely, if we use a large number of layers on a small data set, MLII cannot gain enough information to generate approximate (redundant) rules for later successive refinement, and this in turn affects the completeness and consistency of the final generated rules.

For each curve in Figure 1, we find that the test set accuracy increases significantly from the first-layer to the second-layer rules, still increases from the second to the third layer, and that the improvement diminishes as the number of layers increases. This indicates that the approximate rules generated at the first layer are successively refined at the following learning layers (i.e., the rules become more consistent and accurate) and finally achieve an optimal level, at which they are no longer redundant.

[Fig. 1. Rule Accuracy at Each Induction Layer of Experiment 5.]

Therefore, the successive learning should be stopped at that point and the optimal rules taken as the final rules. In general, it is not the case that the more layers we use, the more accurate the final rules will be.

4 Conclusions

Multi-layer induction learns accurate rules in an incremental manner. It handles subsets of training examples sequentially. Compared to handling training examples one by one, as existing incremental learning algorithms do, this sequential incrementality is more flexible, because the size of the data subsets is controlled by data partitioning in multi-layer induction. Multi-layer induction suits noisy domains, because data partitioning dilutes the effects of noise across the data subsets. Five experiments were carried out in this paper to quantify the gains of MLII, and a significant improvement in rule accuracy has been achieved. Multi-layer induction is designed for handling large and/or noisy data sets; with medium-sized, noise-free data sets, we have not found much improvement of MLII over HCV induction in rule accuracy. The information quality of the data sets is a critical factor in determining the number of layers in MLII. The noise level, the number of training examples, the numbers of attributes and classes, and the value domains of the attributes are all contributing factors when applying MLII to a particular data set.

Future work will involve applying MLII to other induction programs, such as C4.5 [Quinlan 1993], extending the experiments to larger data sets, and comparing with other incremental learning methods such as case-based learning [Ram 1990], which learns incrementally case by case and treats each case as a chunk of partially matched rules.

References

[Kodratoff 1984] Kodratoff, Y. (1984). Learning complex structural descriptions from examples. Computer Vision, Graphics and Image Processing 27.
[Langley 1996] Langley, P. (1996). Elements of Machine Learning. Morgan Kaufmann.
[Leung 1992] Leung, K. T. (1992). Elementary Set Theory (3rd ed.). Hong Kong University Press.
[Michalski 1984] Michalski, R. S. (1984). A theory and methodology for inductive learning. Artificial Intelligence 20(2).
[Michalski 1985] Michalski, R. S. (1985). Knowledge repair mechanisms: Evolution versus revolution. In Proceedings of the Third International Machine Learning Workshop, 116-119. Rutgers University.
[Murphy & Aha 95] Murphy, P. M. & Aha, D. W. (1995). UCI Repository of Machine Learning Databases, Machine-Readable Data Repository. University of California, Department of Information and Computer Science, Irvine, CA.
[Quinlan 1993] Quinlan, J. R. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann.
[Ram 1990] Ram, A. (1990). Incremental learning of explanation patterns and their indices. In Proceedings of the Seventh International Conference on Machine Learning, 49-57. Morgan Kaufmann.
[Schlimmer and Fisher 1986] Schlimmer, J. and Fisher, D. (1986). A case study of incremental concept induction. In Proceedings of the Fifth National Conference on Artificial Intelligence, 496-501. Morgan Kaufmann.
[Tim 1993] Tim, N. (1993). Discriminant generalization in logic programs. Knowledge Representation and Organization in Machine Learning 14(3), 345-351.
[Wu 1995] Wu, X. (1995). Knowledge Acquisition from Databases. Ablex.
