A Late Acceptance Hill-Climbing algorithm: the winner of the International Optimisation Competition
1 The University of Nottingham, Nottingham, United Kingdom. A Late Acceptance Hill-Climbing algorithm: the winner of the International Optimisation Competition. Yuri Bykov, 16 February 2012, ASAP group research seminar.
2 The contents of the presentation:
- The Magic Square (MS) problem: unconstrained and constrained variants.
- The International Optimisation Competition and its results.
- Understanding the MS problem.
- The decomposition of the MS problem.
- Choosing the best optimization heuristic for the MS problem.
3 Magic Square. The task is to put the numbers from 1 to N^2 into an N x N square matrix so that the sums of the numbers in all rows, columns and diagonals are equal to the Magic Number: N*(N^2+1)/2.
- Magic Squares have been known for over 4000 years. In different ancient cultures they had astrological significance.
- Magic Squares engraved on stone and metal were worn as talismans to prevent diseases.
- From ancient times they were widely studied by mathematicians.
- They were widely used as an element of art by painters, sculptors and architects.
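As an illustration (a minimal Python sketch, not part of the talk), the Magic Number and the magic property can be checked directly; the classic 3x3 Lo Shu square serves as a test case:

```python
def magic_number(n):
    # N * (N^2 + 1) / 2 -- the common sum of every row, column and diagonal
    return n * (n * n + 1) // 2

def is_magic(sq):
    n = len(sq)
    m = magic_number(n)
    rows = all(sum(row) == m for row in sq)
    cols = all(sum(sq[i][j] for i in range(n)) == m for j in range(n))
    diags = (sum(sq[i][i] for i in range(n)) == m
             and sum(sq[i][n - 1 - i] for i in range(n)) == m)
    return rows and cols and diags

lo_shu = [[2, 7, 6],
          [9, 5, 1],
          [4, 3, 8]]
print(magic_number(3), is_magic(lo_shu))  # 15 True
```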
4 Unconstrained and constrained Magic Square. A simple (unconstrained) Magic Square can be solved with a variety of quick exact algorithms; the oldest one (the Siamese method) has long been known in Europe. A more complex variant is the constrained Magic Square. There are many ways to put constraints on the Magic Square; in one variant, the solution should contain a given submatrix in a given position. There is no known exact polynomial algorithm for this problem, so we can try heuristic methods to solve it.
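The Siamese method mentioned above is easy to sketch for odd orders (a Python illustration, not the competition code):

```python
def siamese(n):
    """Construct an odd-order magic square by the Siamese method:
    start in the middle of the top row, keep moving up-right (wrapping
    around), and drop one cell down whenever the target cell is occupied."""
    assert n % 2 == 1, "the Siamese method works for odd orders only"
    sq = [[0] * n for _ in range(n)]
    r, c = 0, n // 2
    for k in range(1, n * n + 1):
        sq[r][c] = k
        nr, nc = (r - 1) % n, (c + 1) % n
        if sq[nr][nc]:              # occupied: move one cell down instead
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return sq

for row in siamese(3):
    print(row)
```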
5 Magic Square competition. The International Optimization Competition was organized in November 2011 and hosted by SolveIT Software Pty Ltd. The purpose of the competition was to promote modern heuristic optimization methods among the research communities. The task was to develop an algorithm able to solve the largest constrained Magic Square problem within one minute. The algorithm had to be submitted as a Java command-line application. Submitted applications were first tested on Magic Squares of sizes 20x20, 100x100 and 200x200, and then on progressively larger sizes to determine the maximum size solvable within 1 minute. The first place award was 5000 AUD. The details of the competition are available on the competition web page.
6 The Results of the Competition. The competition results became available on 19 December 2011.
- The 2nd runner-up was Xiaofeng Xie from Carnegie Mellon University: his program solves the constrained version of the magic square at the competition entry-requirement size within a minute.
- The 1st runner-up was Geoffrey Chu from the University of Melbourne: his program solves the constrained version of a 1,000 x 1,000 magic square within a minute.
- The winner of the competition was Yuri Bykov from the University of Nottingham: his program solves the constrained version of a 2,600 x 2,600 magic square within a minute.
7 Why 2600x2600? The competition rules required the program to compose the MS and write the result into a text file, which restricted the largest possible size. The text file containing a 2600x2600 Magic Square has a size of 52 Mb, and the Java procedure for writing text files is quite slow: it takes 40 seconds just to output the solution, while the actual solving procedure takes less than 20 seconds. The largest MS solved by the Java program is 3300x3300; for larger sizes the Java virtual machine reports a memory error. The same program written in Delphi was able to solve an MS of size 7800x7800 in less than one minute, but this limit was also caused by the maximum available PC memory size. On better hardware this algorithm could solve an even larger MS within one minute. The winning size is 2.6 times larger than the 1st runner-up's (2600 vs 1000).
8 How to create such an algorithm? The following five steps are necessary:
- To understand the problem.
- To find the right problem representation.
- To formulate a suitable cost function.
- To select an effective set of moves.
- To choose a proper optimization heuristic.
9 Understanding the problem. We can say the following regarding the MS problem:
- In reality, the MS problem is a typical permutation problem. It differs from TSP, Quadratic Assignment, Flowshop Scheduling, etc. only in its cost function.
- The cost function is not given explicitly. Therefore we can define and use any cost function whose global minimum corresponds to the target MS.
- This is a constraint satisfaction problem (rather than a minimization one), i.e. the goal is the global optimum only. Any local optimum will not satisfy us, regardless of how close it is to the global one.
- However, heuristic search methods usually provide near-optimum solutions. To reach the global optimum by heuristic search we have to employ a reheating mechanism.
10 Reheating. If the search converges before reaching the global optimum, the control parameters are reassigned and a new search attempt is started. The attempts continue until the global optimum is achieved. (Plot: cost vs number of moves over several attempts.) With a longer attempt we are more likely to get a better solution; with a shorter attempt we can run more attempts in the same time. Thus, in order to reach the global optimum in minimum time we should find the right balance between the number of attempts and their length.
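The attempts/length trade-off can be seen on a toy problem (a made-up bit-matching task, not the MS problem): each greedy attempt is limited to a fixed number of moves, so some attempts fail to converge and a fresh attempt is started, mirroring the restart idea above.

```python
import random

def attempt(target, iters, rng):
    """One search attempt: greedy hill climbing from a random start,
    cut off after a fixed number of moves."""
    n = len(target)
    x = [rng.randint(0, 1) for _ in range(n)]
    cost = sum(a != b for a, b in zip(x, target))
    for _ in range(iters):
        if cost == 0:
            break
        i = rng.randrange(n)
        delta = ((1 - x[i]) != target[i]) - (x[i] != target[i])
        if delta < 0:               # accept improving flips only
            x[i] = 1 - x[i]
            cost += delta
    return cost

def solve_with_restarts(target, iters, rng, max_attempts=100_000):
    """Restart short attempts until the global optimum (cost 0) is hit."""
    for attempt_no in range(1, max_attempts + 1):
        if attempt(target, iters, rng) == 0:
            return attempt_no
    return None

# short attempts (40 moves) usually fail, so several attempts are needed
print(solve_with_restarts([1] * 20, iters=40, rng=random.Random(42)))
```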
11 Simplistic approach. The simplest variant is to solve the entire problem in a single phase by a one-point heuristic search:
- Initialization: the numbers are put randomly into the square matrix (the constraint matrix is placed into its given position).
- Moves: swapping two randomly chosen numbers (the elements of the constraint matrix are not moved).
- Cost function (C): the sum of the absolute deviations of the sums of rows, columns and diagonals from the Magic Number.
- Acceptance condition: can be HC, SA, or LAHC.
(Flowchart: initialize s, then repeatedly generate a candidate s', calculate its cost C, and apply the acceptance condition; on convergence, stop if C=0, otherwise reheat and continue.)
Unfortunately this approach is not powerful enough (max N=50-70 in one minute).
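The simplistic approach can be sketched as follows (Python, with plain greedy acceptance only; the parameter values are illustrative). As the slide notes, hill climbing alone usually stalls in a local optimum above cost 0:

```python
import random

def cost(sq):
    """Sum of absolute deviations of all row, column and diagonal sums
    from the Magic Number N*(N^2+1)/2."""
    n = len(sq)
    m = n * (n * n + 1) // 2
    c = sum(abs(sum(row) - m) for row in sq)
    c += sum(abs(sum(sq[i][j] for i in range(n)) - m) for j in range(n))
    c += abs(sum(sq[i][i] for i in range(n)) - m)
    c += abs(sum(sq[i][n - 1 - i] for i in range(n)) - m)
    return c

def hill_climb(n, iters, rng):
    """Random initial placement, random pair swaps, accept non-worsening."""
    nums = list(range(1, n * n + 1))
    rng.shuffle(nums)
    sq = [nums[i * n:(i + 1) * n] for i in range(n)]
    best = cost(sq)
    for _ in range(iters):
        if best == 0:
            break
        (i1, j1), (i2, j2) = [(rng.randrange(n), rng.randrange(n)) for _ in range(2)]
        sq[i1][j1], sq[i2][j2] = sq[i2][j2], sq[i1][j1]
        c = cost(sq)
        if c <= best:
            best = c
        else:                       # reject: undo the swap
            sq[i1][j1], sq[i2][j2] = sq[i2][j2], sq[i1][j1]
    return best

print(hill_climb(5, 5000, random.Random(7)))  # often stays above 0
```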
12 The golden rule of heuristic optimization: if you can decompose the problem, do it! This is the most effective way to increase the quality of results.
13 A decomposition approach. The Magic Square problem can be decomposed into a number of Border Square, or Magic Frame, problems:
- The Magic Frame (MF) is an N x N square matrix where only the border rows and columns are non-zero.
- The sum of the numbers in each border row and column is equal to the Magic Number. The sum of the numbers in each other row, column and diagonal is equal to N^2+1.
- The MF contains 2*(N-1) numbers A_i <= N^2/2 and the same number of their counterparts B_i = N^2+1-A_i placed symmetrically.
- The MF can be composed from any given set of numbers which satisfies the above conditions.
- The MF can contain constraints (e.g. those specified for the complete Magic Square).
14 The main property of the Magic Frames. If we take an N x N Magic Frame and place inside it an (N-2) x (N-2) Magic Square composed from the remaining numbers, then we get an N x N Magic Square. Correspondingly:
- The MS can be composed by placing the MFs one inside another.
- A smaller MS can be placed inside the set of MFs.
- If the constraint matrix is placed not far from the border, we can construct several constrained MFs and then fill the centre by a quick exact method.
- If the constraint matrix lies deeper inside, we can construct several unconstrained MFs and then use the simplistic approach to insert a small constrained MS in the centre.
15 Moving elements within the Magic Square. With deeply placed constraints we can also use the following method: if we take four points A, B, C and D that form the vertices of a rectangle, do not lie on the diagonals, and satisfy A+B=C+D, then we can swap A with C and B with D without disturbing the magic properties of the square. By this method we can place the constraints close to the border, compose the Magic Square, and then move the constraints into the necessary position. If some of these points lie on the diagonals, the same procedure is repeated to repair the Magic Square.
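The preservation of row and column sums under this move holds for any matrix, magic or not, as a small Python sketch shows (the matrix below is a made-up example with a planted rectangle satisfying A+B=C+D):

```python
def line_sums(m):
    rows = [sum(r) for r in m]
    cols = [sum(r[j] for r in m) for j in range(len(m[0]))]
    return rows, cols

def rect_swap(m, r1, r2, c1, c2):
    """Swap A<->C and B<->D vertically within columns c1 and c2.
    Requires A+B == C+D, where A, B sit in row r1 and C, D in row r2."""
    assert m[r1][c1] + m[r1][c2] == m[r2][c1] + m[r2][c2]
    m[r1][c1], m[r2][c1] = m[r2][c1], m[r1][c1]
    m[r1][c2], m[r2][c2] = m[r2][c2], m[r1][c2]

M = [[1, 2, 3, 9],      # A=2 at (0,1), B=9 at (0,3): A+B = 11
     [4, 5, 6, 6],      # C=5 at (1,1), D=6 at (1,3): C+D = 11
     [7, 8, 0, 2],
     [3, 1, 4, 5]]
before = line_sums(M)
rect_swap(M, 0, 1, 1, 3)
print(line_sums(M) == before)  # True: all row and column sums survive
```

In the Magic Square itself the diagonal sums are also untouched, because the four cells are required to avoid the diagonals.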
16 Heuristic search for the Magic Frame problem. It differs from the simplistic approach in the following points:
- At the initialization step the algorithm selects the necessary numbers A_i and puts them randomly into the Magic Frame. Constraints are put into their necessary places. Counterparts B_i are put symmetrically to A_i.
- To calculate the cost function we first calculate the sums of the numbers in the first row and the first column. The cost is then the sum of the absolute deviations of these sums from the Magic Number. We do not need to count the last row and the last column, as they give the same cost.
(Worked example on a 5x5 frame: the Magic Number is 65 and the cost is the sum of the absolute deviations of the first-row and first-column sums from 65.)
17 Heuristic search for the MF problem (cont). Solutions are modified by two types of moves:
- 1st type: a randomly chosen number is swapped with its counterpart.
- 2nd type: the algorithm randomly chooses two numbers and then swaps them and their counterparts.
The type of the move is selected randomly. Constraints are not moved. Both types preserve the feasibility of the MF, i.e. A_i + B_i = N^2+1 holds throughout the search procedure.
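The two move types can be sketched abstractly in Python (an illustration, not the competition code: the frame geometry is abstracted away, so each pair of symmetric slots is tracked simply as a value pair):

```python
import random

def random_frame_pairs(n, rng):
    """Pick 2*(n-1) numbers A_i <= n*n/2 and attach the counterpart
    B_i = n*n + 1 - A_i to each."""
    chosen = rng.sample(range(1, n * n // 2 + 1), 2 * (n - 1))
    return [[a, n * n + 1 - a] for a in chosen]

def move(pairs, rng):
    """Apply one randomly chosen move of the MF search."""
    if rng.random() < 0.5:
        # 1st type: a number changes place with its own counterpart
        p = rng.choice(pairs)
        p[0], p[1] = p[1], p[0]
    else:
        # 2nd type: two numbers are swapped together with their
        # counterparts, i.e. the two pairs exchange their slots
        i, j = rng.sample(range(len(pairs)), 2)
        pairs[i], pairs[j] = pairs[j], pairs[i]

n, rng = 7, random.Random(1)
pairs = random_frame_pairs(n, rng)
for _ in range(1000):
    move(pairs, rng)
print(all(a + b == n * n + 1 for a, b in pairs))  # True: feasibility is kept
```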
18 The scheme of the entry algorithm. The entry algorithm has to solve constrained and unconstrained MS problems of any size. (Flowchart: depending on whether the problem is constrained, whether its size is below 20, whether the constraints lie near the border, and whether the order is evenly even, the algorithm either applies the simplistic approach to the whole MS, or solves one or more outer MFs, replaces constraints where necessary, applies the constructive method, and fills the centre with the constructive method.)
19 Choosing an optimization heuristic. Three candidate heuristics were tested: Greedy Hill-Climbing (HC), Simulated Annealing (SA), and Late Acceptance Hill-Climbing (LAHC).
- A suitable heuristic should provide a stable result within 1 minute on MS problems with a high variation of size: from 25x25 to 7000x7000. Exceeding 1 minute is regarded as a failure.
- The algorithm should run in a fully automated mode, so the algorithmic parameters should be independent of the size of the problem.
- The parameterization is complicated because we have a search with reheating.
- All tests are run on the most complex constrained MS: the constraint matrix is placed at position (4,4). Here the algorithm consecutively solves 7 outer MFs.
20 Late Acceptance Hill-Climbing (LAHC). This is a new optimization heuristic.
- All improving candidates are accepted.
- A worse candidate is accepted if it is better than the solution which was current several iterations ago.
- Previous current costs are stored in a list (the fitness array) of length L_fa. The candidate cost is compared with the last element of the list.
- L_fa is the single input parameter of this algorithm.
- During reheating, all elements of the fitness array are increased by 10% of their initial values.
Hence, we have to set up just one value: the length of the fitness array L_fa.
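A compact LAHC sketch in Python (illustrative only; a toy "sort a permutation by swaps" task stands in for the MS cost, and reheating is only indicated in a comment):

```python
import random

def lahc(start, cost, neighbor, l_fa, max_iters, rng):
    """Late Acceptance Hill-Climbing: a candidate is accepted if it is
    no worse than the current solution OR no worse than the cost that
    was current l_fa iterations ago (kept in the fitness array)."""
    cur, c_cur = start, cost(start)
    fitness = [c_cur] * l_fa           # the fitness array of length L_fa
    for it in range(max_iters):
        if c_cur == 0:                 # global optimum reached: stop
            break
        cand = neighbor(cur, rng)
        c_cand = cost(cand)
        v = it % l_fa                  # the array is used as a ring buffer
        if c_cand <= c_cur or c_cand <= fitness[v]:
            cur, c_cur = cand, c_cand
        fitness[v] = c_cur
        # (reheating would increase all fitness values by 10% of their
        #  initial values whenever the search converges prematurely)
    return cur, c_cur

# toy stand-in for the MS cost: number of misplaced items in a permutation
def cost(p):
    return sum(1 for i, x in enumerate(p) if x != i)

def neighbor(p, rng):                  # move: swap two random elements
    q = list(p)
    i, j = rng.sample(range(len(q)), 2)
    q[i], q[j] = q[j], q[i]
    return q

rng = random.Random(0)
start = rng.sample(range(8), 8)
_, best = lahc(start, cost, neighbor, l_fa=50, max_iters=50_000, rng=rng)
print(best)
```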
21 The parameterization of the LAHC. This is quite straightforward: run the algorithm a number of times with different L_fa, record the processing time and choose the L_fa which gives the shortest time. (Plots: time vs L_fa for sizes 25x25 and 1000x1000.) With relatively small problems the LAHC works well with any value of L_fa (except for very small values). This demonstrates that HC (i.e. LAHC with L_fa=1) is not suitable for the MS problem.
22 The parameterization of the LAHC (cont.). The tests continued with the largest problem (7000x7000). In the majority of cases the LAHC produced a result in less than 30 sec. Only one run (out of 3669) lasted over 60 sec, so the reliability of the method is about 99.97%. The experiments did not reveal a clear optimum for L_fa; the chosen L_fa looks to be good enough for all problem sizes. (Plot: time vs L_fa for size 7000x7000.)
23 Simulated Annealing. The second most studied method among all metaheuristics (after Genetic Algorithms).
- All improving candidates are accepted. Worse candidates are accepted with probability P = exp[(C - C')/T_j], where C is the current cost, C' is the candidate cost and T_j is the temperature at the j-th iteration.
- In the logarithmic cooling schedule the temperature for the next iteration is T_{j+1} = T_j/(1 + lambda*T_j). The cooling factor lambda can be calculated as lambda = (T_i - T_f)/(N_tot * T_i * T_f), where T_i is the initial temperature, T_f is the final temperature and N_tot is the total number of iterations required to pass from T_i to T_f.
- During reheating the temperature is increased by 10% of its initial value.
Thus, before running SA, we have to set up three parameters: the initial temperature T_i, the final temperature T_f and the total number of iterations N_tot.
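The acceptance rule and the cooling schedule above can be written down directly (a Python sketch with illustrative parameter values; note that with this lambda the schedule lands exactly on T_f after N_tot steps, since 1/T grows by lambda each iteration):

```python
import math
import random

def sa_accept(c_cur, c_cand, t, rng):
    """Accept improvements always; accept a worse candidate with
    probability P = exp((C - C') / T)."""
    return c_cand <= c_cur or rng.random() < math.exp((c_cur - c_cand) / t)

def cooling_factor(t_i, t_f, n_tot):
    """lambda = (Ti - Tf) / (Ntot * Ti * Tf)."""
    return (t_i - t_f) / (n_tot * t_i * t_f)

t_i, t_f, n_tot = 1e6, 0.5, 10_000
lam = cooling_factor(t_i, t_f, n_tot)
t = t_i
for _ in range(n_tot):                 # T_{j+1} = T_j / (1 + lambda * T_j)
    t = t / (1 + lam * t)
print(round(t, 6))  # 0.5: the schedule reaches T_f after N_tot iterations
```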
24 The initial and final temperatures of SA. The literature suggests that the initial temperature should be set so that around 85% of non-improving moves are accepted. However, tests have shown that the optimal initial temperature varies greatly across problems of different sizes, reaching the order of 10^6 for large MS sizes. We have two hypothetical options for setting the optimal T_i for each MS size: to develop a special algorithm for adjusting T_i, or to derive an analytical formula for T_i using regression. Both options require significant extra effort, and it is unknown how beneficial these suggestions are for the MS problem, so these experiments were postponed. The final temperature should guarantee the convergence of the search. The convergence temperature also differs greatly across runs and MS sizes; during tests it was never lower than 0.5, so this value was assumed for T_f.
25 Testing the total number of iterations for SA. Having T_i and T_f, the algorithm was run a number of times while varying N_tot. (Plots: time vs N_tot (millions) for sizes 25x25 and 200x200.) With small-sized problems SA works well (similarly to the LAHC). The optimal value of N_tot is not completely clear from these diagrams.
26 Problems of larger size. When the run time exceeded 60 sec the search was stopped and a failure was recorded. (Plots: time vs N_tot (millions) for sizes 1000x1000 and 2600x2600.) With the 1000x1000 problem the search fails in 5% of cases. With the 2600x2600 problem only 38% of runs were successful.
27 The largest problem. With the 7000x7000 problem SA fails in 93% of cases; the reliability is only 7%. These experiments did not reveal any beneficial value of N_tot for the large-sized problems. (Plot: time vs N_tot (millions) for size 7000x7000.) A hypothesis: maybe the T_i taken from the literature is not suitable here? To check this, it is worth testing T_i in the same way.
28 Testing the initial temperature. N_tot was set to 20 million and SA was run while varying T_i. (Plots: time vs T_i (millions) for sizes 2600x2600 and 7000x7000.) With the 2600x2600 problem the search fails in 62% of cases; with the 7000x7000 problem only 7% of runs were successful. The experiments again did not reveal an optimal value of T_i.
29 The last attempt to tune SA. Finally, SA was run while varying both T_i and N_tot. (Scatter plot over T_i (millions) and N_tot (millions) for size 7000x7000, marking runs with T <= 60 sec and T > 60 sec.) This last attempt still did not reveal any region of T_i and N_tot where SA does not fail: SA failed in 83% of runs, and the failures are distributed uniformly over the experimental space. Although the parameterization of SA took a lot of time and effort, it did not yield a positive result. It can be concluded that SA performs very poorly on large MS problems regardless of the parameter settings.
30 Final comparison of the LAHC with the SA. Several qualifiers were taken into account:

Qualifier                           | LAHC                         | SA
Implementation                      | Simple                       | Simple
Productivity                        | 8*10^6 iterations per second | 4.9*10^6 iterations per second (because of the time-expensive exponent calculation)
Number of parameters                | 1                            | 3
Parameterization                    | Easy                         | Hard
Reliability with small MS problems  | High                         | High
Reliability with large MS problems  | High                         | Low
Suitable for the competition entry  | Yes                          | No
31 Conclusions.
- There is already a known example, a non-linearly rescaled Exam Timetabling problem, where SA fails but the LAHC works well.
- The Magic Square problem is the second example where the LAHC clearly outperforms SA.
- It can be proposed that the LAHC is more suitable than SA for very large optimization problems. This is not surprising, because the LAHC holds a unique property: it combines the power of one-point searches with the reliability of ranking-based methods.
- You can check everything yourself: the original Java code is available on the above web page.
32 Acknowledgement. This was my first program written in Java, and I would like to thank Matthew Hyde, who helped me with my first steps in Java.
33 Any questions?
More informationProblem Solving: Informed Search
Problem Solving: Informed Search References Russell and Norvig, Artificial Intelligence: A modern approach, 2nd ed. Prentice Hall, 2003 (Chapters 1,2, and 4) Nilsson, Artificial intelligence: A New synthesis.
More informationEvaluating Classifiers
Evaluating Classifiers Charles Elkan elkan@cs.ucsd.edu January 18, 2011 In a real-world application of supervised learning, we have a training set of examples with labels, and a test set of examples with
More informationB553 Lecture 12: Global Optimization
B553 Lecture 12: Global Optimization Kris Hauser February 20, 2012 Most of the techniques we have examined in prior lectures only deal with local optimization, so that we can only guarantee convergence
More informationHEURISTIC OPTIMIZATION USING COMPUTER SIMULATION: A STUDY OF STAFFING LEVELS IN A PHARMACEUTICAL MANUFACTURING LABORATORY
Proceedings of the 1998 Winter Simulation Conference D.J. Medeiros, E.F. Watson, J.S. Carson and M.S. Manivannan, eds. HEURISTIC OPTIMIZATION USING COMPUTER SIMULATION: A STUDY OF STAFFING LEVELS IN A
More informationOptimization. Industrial AI Lab.
Optimization Industrial AI Lab. Optimization An important tool in 1) Engineering problem solving and 2) Decision science People optimize Nature optimizes 2 Optimization People optimize (source: http://nautil.us/blog/to-save-drowning-people-ask-yourself-what-would-light-do)
More informationVNS-based heuristic with an exponential neighborhood for the server load balancing problem
Available online at www.sciencedirect.com Electronic Notes in Discrete Mathematics 47 (2015) 53 60 www.elsevier.com/locate/endm VNS-based heuristic with an exponential neighborhood for the server load
More informationArtificial Intelligence
Artificial Intelligence Informed Search and Exploration Chapter 4 (4.3 4.6) Searching: So Far We ve discussed how to build goal-based and utility-based agents that search to solve problems We ve also presented
More informationLecture 6: Constraint Satisfaction Problems (CSPs)
Lecture 6: Constraint Satisfaction Problems (CSPs) CS 580 (001) - Spring 2018 Amarda Shehu Department of Computer Science George Mason University, Fairfax, VA, USA February 28, 2018 Amarda Shehu (580)
More informationCHAPTER 6 IDENTIFICATION OF CLUSTERS USING VISUAL VALIDATION VAT ALGORITHM
96 CHAPTER 6 IDENTIFICATION OF CLUSTERS USING VISUAL VALIDATION VAT ALGORITHM Clustering is the process of combining a set of relevant information in the same group. In this process KM algorithm plays
More informationAlgorithms & Complexity
Algorithms & Complexity Nicolas Stroppa - nstroppa@computing.dcu.ie CA313@Dublin City University. 2006-2007. November 21, 2006 Classification of Algorithms O(1): Run time is independent of the size of
More informationLecture 4. Convexity Robust cost functions Optimizing non-convex functions. 3B1B Optimization Michaelmas 2017 A. Zisserman
Lecture 4 3B1B Optimization Michaelmas 2017 A. Zisserman Convexity Robust cost functions Optimizing non-convex functions grid search branch and bound simulated annealing evolutionary optimization The Optimization
More informationEvolutionary Non-Linear Great Deluge for University Course Timetabling
Evolutionary Non-Linear Great Deluge for University Course Timetabling Dario Landa-Silva and Joe Henry Obit Automated Scheduling, Optimisation and Planning Research Group School of Computer Science, The
More informationSingle Candidate Methods
Single Candidate Methods In Heuristic Optimization Based on: [3] S. Luke, "Essentials of Metaheuristics," [Online]. Available: http://cs.gmu.edu/~sean/book/metaheuristics/essentials.pdf. [Accessed 11 May
More informationTabu Search for Constraint Solving and Its Applications. Jin-Kao Hao LERIA University of Angers 2 Boulevard Lavoisier Angers Cedex 01 - France
Tabu Search for Constraint Solving and Its Applications Jin-Kao Hao LERIA University of Angers 2 Boulevard Lavoisier 49045 Angers Cedex 01 - France 1. Introduction The Constraint Satisfaction Problem (CSP)
More informationNotes for Lecture 24
U.C. Berkeley CS170: Intro to CS Theory Handout N24 Professor Luca Trevisan December 4, 2001 Notes for Lecture 24 1 Some NP-complete Numerical Problems 1.1 Subset Sum The Subset Sum problem is defined
More informationReview of the Robust K-means Algorithm and Comparison with Other Clustering Methods
Review of the Robust K-means Algorithm and Comparison with Other Clustering Methods Ben Karsin University of Hawaii at Manoa Information and Computer Science ICS 63 Machine Learning Fall 8 Introduction
More informationA Modular Multiphase Heuristic Solver for Post Enrolment Course Timetabling
A Modular Multiphase Heuristic Solver for Post Enrolment Course Timetabling Marco Chiarandini 1, Chris Fawcett 2, and Holger H. Hoos 2,3 1 University of Southern Denmark, Department of Mathematics and
More informationChapter 16 Heuristic Search
Chapter 16 Heuristic Search Part I. Preliminaries Part II. Tightly Coupled Multicore Chapter 6. Parallel Loops Chapter 7. Parallel Loop Schedules Chapter 8. Parallel Reduction Chapter 9. Reduction Variables
More informationREGRESSION ANALYSIS : LINEAR BY MAUAJAMA FIRDAUS & TULIKA SAHA
REGRESSION ANALYSIS : LINEAR BY MAUAJAMA FIRDAUS & TULIKA SAHA MACHINE LEARNING It is the science of getting computer to learn without being explicitly programmed. Machine learning is an area of artificial
More informationLocal Search and Optimization Chapter 4. Mausam (Based on slides of Padhraic Smyth, Stuart Russell, Rao Kambhampati, Raj Rao, Dan Weld )
Local Search and Optimization Chapter 4 Mausam (Based on slides of Padhraic Smyth, Stuart Russell, Rao Kambhampati, Raj Rao, Dan Weld ) 1 Outline Local search techniques and optimization Hill-climbing
More informationEscaping Local Optima: Genetic Algorithm
Artificial Intelligence Escaping Local Optima: Genetic Algorithm Dae-Won Kim School of Computer Science & Engineering Chung-Ang University We re trying to escape local optima To achieve this, we have learned
More informationLocal Search Methods. CS 188: Artificial Intelligence Fall Announcements. Hill Climbing. Hill Climbing Diagram. Today
CS 188: Artificial Intelligence Fall 2006 Lecture 5: Robot Motion Planning 9/14/2006 Local Search Methods Queue-based algorithms keep fallback options (backtracking) Local search: improve what you have
More informationCHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM
20 CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM 2.1 CLASSIFICATION OF CONVENTIONAL TECHNIQUES Classical optimization methods can be classified into two distinct groups:
More informationIntroduction to Optimization Using Metaheuristics. Thomas J. K. Stidsen
Introduction to Optimization Using Metaheuristics Thomas J. K. Stidsen Outline General course information Motivation, modelling and solving Hill climbers Simulated Annealing 1 Large-Scale Optimization
More informationCPSC 340: Machine Learning and Data Mining. Regularization Fall 2016
CPSC 340: Machine Learning and Data Mining Regularization Fall 2016 Assignment 2: Admin 2 late days to hand it in Friday, 3 for Monday. Assignment 3 is out. Due next Wednesday (so we can release solutions
More informationABSTRACT I. INTRODUCTION. J Kanimozhi *, R Subramanian Department of Computer Science, Pondicherry University, Puducherry, Tamil Nadu, India
ABSTRACT 2018 IJSRSET Volume 4 Issue 4 Print ISSN: 2395-1990 Online ISSN : 2394-4099 Themed Section : Engineering and Technology Travelling Salesman Problem Solved using Genetic Algorithm Combined Data
More informationACO and other (meta)heuristics for CO
ACO and other (meta)heuristics for CO 32 33 Outline Notes on combinatorial optimization and algorithmic complexity Construction and modification metaheuristics: two complementary ways of searching a solution
More informationHardware-Software Codesign
Hardware-Software Codesign 4. System Partitioning Lothar Thiele 4-1 System Design specification system synthesis estimation SW-compilation intellectual prop. code instruction set HW-synthesis intellectual
More informationInformed search algorithms. Chapter 4
Informed search algorithms Chapter 4 Outline Best-first search Greedy best-first search A * search Heuristics Memory Bounded A* Search Best-first search Idea: use an evaluation function f(n) for each node
More informationRules for Identifying the Initial Design Points for Use in the Quick Convergent Inflow Algorithm
International Journal of Statistics and Probability; Vol. 5, No. 1; 2016 ISSN 1927-7032 E-ISSN 1927-7040 Published by Canadian Center of Science and Education Rules for Identifying the Initial Design for
More informationArtificial Intelligence p.1/49. n-queens. Artificial Intelligence p.2/49. Initial state: the empty board or a board with n random
Example: n-queens Put n queens on an n n board with no two queens on the same row, column, or diagonal A search problem! State space: the board with 0 to n queens Initial state: the empty board or a board
More informationUsing an outward selective pressure for improving the search quality of the MOEA/D algorithm
Comput Optim Appl (25) 6:57 67 DOI.7/s589-5-9733-9 Using an outward selective pressure for improving the search quality of the MOEA/D algorithm Krzysztof Michalak Received: 2 January 24 / Published online:
More informationSolving Traveling Salesman Problem Using Parallel Genetic. Algorithm and Simulated Annealing
Solving Traveling Salesman Problem Using Parallel Genetic Algorithm and Simulated Annealing Fan Yang May 18, 2010 Abstract The traveling salesman problem (TSP) is to find a tour of a given number of cities
More informationA hybrid heuristic for the p-median problem
Mathematical Programming in Rio Búzios, November 9-12, 2003 A hybrid heuristic for the p-median problem Maurício G.C. RESENDE AT&T Labs Research USA Renato F. WERNECK Princeton University a USA p-median
More informationArtificial Intelligence
Artificial Intelligence Information Systems and Machine Learning Lab (ISMLL) Tomáš Horváth 10 rd November, 2010 Informed Search and Exploration Example (again) Informed strategy we use a problem-specific
More informationThe Influence of Run-Time Limits on Choosing Ant System Parameters
The Influence of Run-Time Limits on Choosing Ant System Parameters Krzysztof Socha IRIDIA, Université Libre de Bruxelles, CP 194/6, Av. Franklin D. Roosevelt 50, 1050 Bruxelles, Belgium ksocha@ulb.ac.be
More informationRecent Developments in Model-based Derivative-free Optimization
Recent Developments in Model-based Derivative-free Optimization Seppo Pulkkinen April 23, 2010 Introduction Problem definition The problem we are considering is a nonlinear optimization problem with constraints:
More informationAnnouncements. CS 188: Artificial Intelligence Spring Today. A* Review. Consistency. A* Graph Search Gone Wrong
CS 88: Artificial Intelligence Spring 2009 Lecture 4: Constraint Satisfaction /29/2009 John DeNero UC Berkeley Slides adapted from Dan Klein, Stuart Russell or Andrew Moore Announcements The Python tutorial
More informationAnnouncements. CS 188: Artificial Intelligence Fall Reminder: CSPs. Today. Example: 3-SAT. Example: Boolean Satisfiability.
CS 188: Artificial Intelligence Fall 2008 Lecture 5: CSPs II 9/11/2008 Announcements Assignments: DUE W1: NOW P1: Due 9/12 at 11:59pm Assignments: UP W2: Up now P2: Up by weekend Dan Klein UC Berkeley
More information