Generation of Near-Optimal Solutions Using ILP-Guided Sampling

Ashwin Srinivasan (1), Gautam Shroff (2), Lovekesh Vig (2), and Sarmimala Saikia (2)
(1) Department of Computer Science & Information Systems, BITS Pilani, Goa Campus, Goa
(2) TCS Research, New Delhi

Abstract. Our interest in this paper is in optimisation problems that are intractable to solve by direct numerical optimisation, but nevertheless have significant amounts of relevant domain-specific knowledge. The category of heuristic search techniques known as estimation of distribution algorithms (EDAs) seeks to incrementally sample from probability distributions in which optimal (or near-optimal) solutions have increasingly higher probabilities. Can we use domain knowledge to assist the estimation of these distributions? To answer this in the affirmative, we need: (a) a general-purpose technique for the incorporation of domain knowledge when constructing models for optimal values; and (b) a way of using these models to generate new data samples. Here we investigate a combination of the use of Inductive Logic Programming (ILP) for (a), and standard logic-programming machinery to generate new samples for (b). We demonstrate the approach on two optimisation problems (predicting optimal depth-of-win for the KRK endgame, and job-shop scheduling). Our results are promising and suggest that the use of ILP-constructed theories could be a useful technique for incorporating complex domain knowledge into estimation of distribution procedures.

1 Introduction

There are many real-world planning problems for which domain knowledge is qualitative, and not easily encoded in a form suitable for numerical optimisation. Here, for instance, are some guiding principles that are followed by the Australian Rail Track Corporation when scheduling trains: (1) if a healthy train is running late, it should be given equal preference to other healthy trains; (2) a higher-priority train should be given preference over a lower-priority train, provided the delay to the lower-priority train is kept to a minimum; and so on. It is evident from this that train-scheduling may benefit from knowing if a train is healthy, what a train's priority is, and so on. But are priorities and train-health fixed, irrespective of the context? What values constitute acceptable delays to a low-priority train? Generating good train-schedules will require a combination of quantitative knowledge of a train's running times and qualitative knowledge about the train in isolation, and in relation to other trains. In this paper, we propose a heuristic search method that comes under the broad category of an estimation of distribution algorithm (EDA).

EDAs iteratively generate better solutions to the optimisation problem using machine-constructed models. Usually, EDAs have used generative probabilistic models, such as Bayesian networks, where domain knowledge needs to be translated into prior distributions and/or network topology. In this paper, we are concerned with problems for which such a translation is not evident. Our interest in ILP is that it presents perhaps one of the most flexible ways to use domain knowledge when constructing models.

2 EDA for Optimisation

The basic EDA approach we use is the one proposed by the MIMIC algorithm [4]. Assuming that we are looking to minimise an objective function F(x), where x is an instance from some instance-space X, the approach first constructs an appropriate machine-learning model to discriminate between samples of lower and higher value, i.e., F(x) ≤ θ and F(x) > θ, and then generates samples using this model.

Procedure EOMS: Evolutionary Optimisation using Model-Assisted Sampling
1. Initialize population P := {x_i}; θ := θ_0
2. while not converged do
   (a) for all x_i in P, label(x_i) := 1 if F(x_i) ≤ θ, else label(x_i) := 0
   (b) train model M to discriminate between 1 and 0 labels, i.e., P(x : label(x) = 1 | M) > P(x : label(x) = 0 | M)
   (c) regenerate P by repeated sampling using model M
   (d) reduce threshold θ
3. return P

Fig. 1. Evolutionary optimisation using machine-learning models to guide sampling.

2.1 ILP-assisted Evolutionary Optimisation

We propose the use of ILP as the model-construction technique in the EOMS procedure, since it provides an extremely flexible way to construct models using domain knowledge. On the face of it, this would seem to pose a difficulty for the sampling step: how are we to generate new instances that are entailed by an ILP-constructed model? There are two straightforward options. First, if we have an enumerator of the instance space X, then we could resort to a form of rejection-sampling. Second, we can restrict ILP theories for any predicate to generative clauses (a syntactic way to do this is by adding constraints to the body of the clause that impose range, that is, type, restrictions on the variables in the head: see [5]), which allows the theories to be used generatively, using standard logic-programming inference machinery to generate instances of the success-set of each predicate.
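To make this concrete, here is a minimal Python sketch of the EOMS loop of Fig. 1, using rejection-sampling against the current model as the regeneration step (the first of the two options above). The callables sample_instance, train_model and reduce_theta, and the model's predict interface, are illustrative assumptions rather than the implementation used in the paper.

    def eoms(F, sample_instance, train_model, reduce_theta,
             theta, pop_size=100, n_iter=5, max_tries=10000):
        # EOMS sketch (Fig. 1); argument names are assumed interfaces, not the paper's code.
        population = [sample_instance() for _ in range(pop_size)]
        for _ in range(n_iter):
            # (a) label instances: 1 if the cost is within the current threshold
            labels = [1 if F(x) <= theta else 0 for x in population]
            # (b) train a discriminator between low- and high-cost instances
            model = train_model(population, labels)
            # (c) regenerate the population by rejection sampling: keep only
            #     instances the model predicts to be low-cost
            new_pop, tries = [], 0
            while len(new_pop) < pop_size and tries < max_tries:
                x = sample_instance()
                if model.predict(x) == 1:
                    new_pop.append(x)
                tries += 1
            population = new_pop or population
            # (d) tighten the cost threshold for the next iteration
            theta = reduce_theta(theta)
        return population

Rejection sampling needs only a random generator for X and a model that can be queried on single instances, which is why it is the simpler of the two options; the generative-clause option replaces the inner while-loop with derivation of instances from the theory itself.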

Instances obtained in this manner are selected with some probability to achieve a non-uniform sampling of the success-set. Both these methods are viable for the purposes of this paper, but in general, we expect that more sophisticated ways of sampling would be needed: see, for example, [3]. The procedure EOIS in Fig. 2 is a refinement of the EOMS procedure above.

Procedure EOIS: Evolutionary Optimisation using ILP-Assisted Sampling
Given: (a) background knowledge B; (b) an upper-bound θ* on the cost of acceptable solutions; (c) a decreasing sequence of cost-values θ_1, θ_2, ..., θ_n s.t. θ_1 ≥ θ_2 ≥ ... ≥ θ_n ≥ θ*; and (d) an upper-bound n on the sample size.
1. Let M_0 := ∅ and P_0 := sample(n, M_0, B)
2. Let k = 1
3. while (θ_k ≥ θ*) do
   (a) E_k^+ := {x_i : x_i ∈ P_{k-1} and F(x_i) ≤ θ_k} and E_k^- := {x_i : x_i ∈ P_{k-1} and F(x_i) > θ_k}
   (b) M_k := ilp(B, E_k^+, E_k^-)
   (c) P_k := sample(n, M_k, B)
   (d) increment k
4. return P_{k-1}

Fig. 2. Evolutionary optimisation using ILP models to guide sampling.

In EOIS, ilp(B, E^+, E^-) is an ILP algorithm that returns a theory M s.t. B ∪ M ⊨ E^+, and B ∪ M is inconsistent with E^- only to the extent allowed by constraints in B; sample(n, M, B) returns a set of at most n instances entailed by B ∪ M, if M ≠ ∅. If M = ∅, it returns a random selection of n instances from the instance-space. In general, we expect sample to be implemented as a probabilistic logic program (see [3]). Here, we make do by providing an initial sample to EOIS as input when M = ∅ (to prevent biasing future iterations, this sample is obtained by uniform random selection from the instance space). For subsequent steps, we assume the availability of a generator that returns a sample of the success-set of the target predicate. That is, when M ≠ ∅, sample returns the set S = {e_i : 1 ≤ i ≤ n, e_i ∈ X, B ∪ M ⊢ e_i and Pr(e_i) ≥ δ}, where δ is some probability threshold (correctly, therefore, sample(n, M, B) should be sample(n, δ, M, B)). Thus if δ = 1, the first n instances derived (or fewer, if there are fewer) using SLD-resolution with B ∪ M will be selected. Finally, we note that on iterations k ≥ 1, we can use the data P_0 ∪ ... ∪ P_{k-1} to obtain training examples E_k^+ and E_k^-, since the actual costs for the P's have already been computed. For clarity, this detail has been omitted in EOIS.
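The following is a minimal Python sketch of the EOIS loop of Fig. 2, including the cumulative use of P_0 ∪ ... ∪ P_{k-1} mentioned in the note above. The interfaces ilp(B, pos, neg) and sample(n, model, B) are assumed stand-ins for the ILP engine and the model-guided sampler just described; they are illustrative, not the authors' implementation.

    def eois(F, B, ilp, sample, thetas, theta_star, n):
        # EOIS sketch (Fig. 2); ilp() and sample() are assumed interfaces.
        model = None                       # M_0: the empty theory
        seen = list(sample(n, model, B))   # P_0: uniform random sample when the model is empty
        population = list(seen)
        for theta_k in thetas:             # decreasing sequence theta_1 >= ... >= theta_n
            if theta_k < theta_star:
                break
            # (a) split all instances sampled so far by the current cost threshold
            pos = [x for x in seen if F(x) <= theta_k]
            neg = [x for x in seen if F(x) > theta_k]
            # (b) induce a theory that, together with B, entails the low-cost instances
            model = ilp(B, pos, neg)
            # (c) draw a fresh population entailed by B and the current theory
            population = list(sample(n, model, B))
            seen.extend(population)        # keep P_0 ∪ ... ∪ P_k for later iterations
        return population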

3 Empirical Evaluation

For reasons of space, only bare details are presented here. For a complete description, see [7].

3.1 Aims

Our aims in the empirical evaluation are to investigate the following conjectures: (1) on each iteration, the EOIS procedure will yield better samples than simple random sampling of the instance-space; and (2) on termination, the EOIS procedure will yield more near-optimal instances than simple random sampling of the same number of instances as used for constructing the model.

3.2 Materials

Data. We use two synthetic datasets, one arising from the KRK chess endgame, and the other a restricted, but nevertheless hard, 5 × 5 job-shop scheduling problem (scheduling 5 jobs taking varying lengths of time onto 5 machines, each capable of processing just one task at a time). The optimisation problem we examine for the KRK endgame is to predict the depth-of-win with optimal play [1]. The job-shop scheduling problem is less controlled than the chess endgame, but is nevertheless representative of many real-life applications (like scheduling trains), which are, in general, known to be computationally hard. We use a job-shop problem with five jobs, each consisting of five tasks that need to be executed in order. These 25 tasks are to be performed using 5 machines, each capable of performing a particular task, albeit for any of the jobs. A 5 × 5 matrix defines how long task j of job i takes to execute on machine j (a sketch of this cost computation is given below).

3.3 Background Knowledge

For Chess, background predicates encode the following (WK denotes the White King, WR the White Rook, and BK the Black King): (a) distance between pieces: WK-BK, WR-BK, WK-WR; (b) file and distance patterns: WR-BK, WK-WR, WK-BK; (c) "alignment distance": WR-BK; (d) adjacency patterns: WK-WR, WK-BK, WR-BK; (e) "between" patterns: WR between WK and BK, WK between WR and BK, BK between WK and WR; (f) distance to closest edge: BK; (g) distance to closest corner: BK; (h) distance to centre: WK; and (i) inter-piece patterns: kings in opposition, kings almost-in-opposition, L-shaped pattern. We direct the reader to [2] for the history of using these concepts, and their definitions.

For Job-Shop, background predicates encode: (a) schedule job J "early" on machine M ("early" means first or second); (b) schedule job J "late" on machine M ("late" means last or second-last); (c) job J has the fastest task for machine M; (d) job J has the slowest task for machine M; (e) job J has a fast task for machine M ("fast" means fastest or second-fastest); (f) job J has a slow task for machine M ("slow" means slowest or second-slowest); (g) waiting time for machine M; (h) total waiting time; and (i) time taken before executing a task on a machine. Correctly, the predicates for (g)-(i) encode upper and lower bounds on times, using the standard inequality predicates ≤ and ≥.
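To make the job-shop cost function concrete, here is a minimal Python sketch of computing the makespan (the value F(x) being minimised) for a candidate schedule, represented as an ordering of the five jobs on each machine. The assumption that machine j executes task j of every job follows the description above; the representation and code are an illustrative sketch, not the paper's encoding.

    def makespan(durations, order):
        # durations[i][j] : time for task j of job i (task j runs on machine j)
        # order[j]        : the order in which machine j processes the jobs
        # Tasks of a job run in order j = 0..4; a machine runs one task at a time.
        n_jobs, n_mach = len(durations), len(durations[0])
        end = [[None] * n_mach for _ in range(n_jobs)]   # completion time of task j of job i
        done = 0
        while done < n_jobs * n_mach:
            progress = False
            for j in range(n_mach):
                for pos, i in enumerate(order[j]):
                    if end[i][j] is not None:
                        continue                          # already scheduled
                    job_prev = 0 if j == 0 else end[i][j - 1]
                    mach_prev = 0 if pos == 0 else end[order[j][pos - 1]][j]
                    if job_prev is None or mach_prev is None:
                        continue                          # a predecessor is not yet scheduled
                    end[i][j] = max(job_prev, mach_prev) + durations[i][j]
                    done += 1
                    progress = True
            if not progress:
                raise ValueError("orderings induce a cycle: infeasible schedule")
        return max(end[i][n_mach - 1] for i in range(n_jobs))

    # Example (made-up durations): every machine processes the jobs in the same order
    times = [[4, 3, 2, 5, 1], [2, 4, 3, 1, 5], [3, 2, 5, 4, 1],
             [5, 1, 2, 3, 4], [1, 5, 4, 2, 3]]
    print(makespan(times, [[0, 1, 2, 3, 4]] * 5))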

Results

Results relevant to conjectures (1) and (2) are tabulated in Fig. 3 and Fig. 4. The principal conclusions that can be drawn from the results are these: (1) for both problems, and every threshold value θ_k, the probability of obtaining instances with cost at most θ_k with model-guided sampling is substantially higher than without a model. This provides evidence that model-guided sampling results in better samples than simple random sampling (Conjecture 1); and (2) for both problems and every threshold value θ_k, samples obtained with model-guided sampling contain a substantially higher number of near-optimal instances than samples obtained without a model (Conjecture 2). We note also that all results have been obtained by sampling a small portion of the instance space (about 10% for Chess, and about 3% for Job-Shop).

[Fig. 3 (a) Chess, (b) Job-Shop: Pr(F(x) ≤ θ_k | M_k) for Model = None and Model = ILP; gains of ILP over None: 6.0, 21.0, 409.0 for Chess and 1.3, 6.8, 26.7 for Job-Shop.]

Fig. 3. Probabilities of obtaining good instances x for each iteration k of the EOIS procedure. In effect, this is an estimate of the precision when predicting F(x) ≤ θ_k. "None" corresponds to simple random sampling (M_k = ∅). The number in parentheses below each ILP entry denotes the ratio of that entry against the corresponding entry for None; this represents the gain in precision of using the ILP model over simple random sampling.

(a) Chess
  Model   Near-Optimal Instances (iterations k = 1, 2, 3)
  None    1/27      2/27      3/27
  ILP     11/27     22/27     27/27
          (1000)    (1964)    (2549)

(b) Job-Shop
  Model   Near-Optimal Instances (iterations k = 1, 2, 3)
  None    3/304     6/304     9/304
  ILP     6/304     28/304    36/304
          (1000)    (1987)    (2895)

Fig. 4. Fraction of near-optimal instances (F(x) ≤ θ*) generated on each iteration of EOIS. The fraction a/b denotes that a of the b near-optimal instances are generated. The numbers in parentheses denote the number of training instances used by the ILP engine. The values for None are the numbers expected from simple random sampling of the same number of instances as used for training the ILP engine.
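The quantities tabulated in Figs. 3 and 4 can be computed directly from each iteration's sample; below is a minimal sketch, assuming the sampled instances are hashable and that the total number of near-optimal instances in the space (27 for Chess, 304 for Job-Shop, the denominators in Fig. 4) is known. The function names are illustrative assumptions.

    def precision_at(F, sample_k, theta_k):
        # Fig. 3: estimate of Pr(F(x) <= theta_k | M_k), i.e. the fraction of the
        # iteration-k sample whose cost is within the current threshold
        return sum(1 for x in sample_k if F(x) <= theta_k) / len(sample_k)

    def near_optimal_found(F, sample_k, theta_star):
        # Fig. 4 numerator: number of distinct near-optimal instances generated
        return len({x for x in sample_k if F(x) <= theta_star})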

We now examine the results in more detail. It is evident that the performance on the Job-Shop domain is not as good as on Chess. The natural question that arises is: why is this so? We conjecture that this is a consequence of the background knowledge for Job-Shop not being as relevant to low cost values as was the case for Chess. Some evidence for this was already apparent when we had to increase the lengths of clauses allowed for the ILP engine (this is usually a sign that the background knowledge is somewhat low-level). In contrast, with Chess, some of the concepts refer specifically to cornering the Black King, with a view to ending the game as soon as possible. We would expect these predicates to be especially useful for positions at depths-of-win near 0. Evidence of the unreliable performance of the EOIS procedure in Chess with irrelevant background knowledge is in Fig. 5. These results suggest a refinement to the conclusions we can draw from the use of EOIS, namely: we expect the EOIS procedure to be less effective if the background knowledge is not very relevant to low-cost solutions.

[Fig. 5 (a): Pr(F(x) ≤ θ_k | M_k, B) for backgrounds B = B_low and B = B_high.]

(b)
  Background   Near-Optimal Instances (iterations k = 1, 2, 3)
  B_low        17/27     4/27      0/27
               (1000)    (1959)    (2581)
  B_high       11/27     17/27     27/27
               (1000)    (1964)    (2549)

Fig. 5. Precision (a), and recall of near-optimal instances (b), of the EOIS procedure with background knowledge of low relevance to near-optimal solutions. The results are for Chess, with B_low denoting background predicates that simply define the geometry of the board, using the predicates less-than and adjacent; B_high denotes the background used previously, with high relevance to low depths-of-win. The numbers in parentheses in (b) are the numbers of training instances, as before.

4 Concluding Remarks

It is uncommon to see ILP applied to optimisation problems. The use of ILP-constructed theories within an evolutionary optimisation procedure is one answer to the question of how to use ILP for optimisation (but not the only answer: see recent work in [6] for a different approach). This requires the ILP model to be used generatively, which is not common practice in ILP. In this paper we have been able to do this by a combination of careful definition of the background knowledge and the addition of range-restrictions to clauses constructed by ILP. Our results provide evidence that this combination of ILP and logic programming provides one way of incorporating complex, but relevant, domain knowledge into evolutionary optimisation.

References

1. M. Bain and S. Muggleton. Learning optimal chess strategies. In K. Furukawa, D. Michie, and S. Muggleton, editors, Machine Intelligence 13. Oxford University Press, Inc., New York, NY, USA.
2. Gabriel Breda. KRK Chess Endgame Database Knowledge Extraction and Compression. Diploma Thesis, Technische Universität Darmstadt.
3. James Cussens. Stochastic logic programs: Sampling, inference and applications. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, UAI '00, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
4. Jeremy S. De Bonet, Charles L. Isbell, Paul Viola, et al. MIMIC: Finding optima by estimating probability densities. In Advances in Neural Information Processing Systems.
5. S. Muggleton and C. Feng. Efficient induction of logic programs. In S. Muggleton, editor, Inductive Logic Programming. Academic Press, London.
6. Y. Shiina and H. Ohwada. Using machine-generated soft constraints for roster problems. In S. Muggleton and H. Watanabe, editors, Latest Advances in Inductive Logic Programming. World Scientific Press.
7. A. Srinivasan, G. Shroff, L. Vig, and S. Saikia. Generation of Near-Optimal Solutions Using ILP-Assisted Sampling. Technical Report TR-EOIS, TCS Research. Available at
