METAHEURISTICS. Introduction. Nature of metaheuristics. Local improvement procedure. Example: objective function


Introduction

Some problems are so complicated that it is not possible to solve them for an optimal solution. For such problems it is still important to find a good feasible solution that is close to the optimal one. A heuristic method is a procedure that is likely to find a very good feasible solution of the problem considered. The procedure should be efficient enough to deal with very large problems, and it is usually an iterative algorithm. Heuristic methods usually fit a specific problem rather than a variety of applications.

Nature of metaheuristics

For a new problem, an OR team would need to start from scratch to develop a heuristic method. This changed with the development of metaheuristics. A metaheuristic is a general solution method that provides a general structure and strategy guidelines for developing a heuristic method for a particular type of problem.

Example: maximize
f(x) = 12x^5 - 975x^4 + 28,000x^3 - 345,000x^2 + 1,800,000x,   subject to 0 <= x <= 31.
The function has three local optima (where?), so this is a nonconvex programming problem, and f(x) is complicated enough that solving it analytically is not practical. A simple heuristic method: conduct a local improvement procedure.

Example: objective function
[Figure: plot of f(x) for 0 <= x <= 31, showing its three local optima.]

Local improvement procedure
Starts with an initial trial solution and uses a hill-climbing procedure. Examples? A typical sequence of trial solutions climbs one of the hills of f(x), as shown in the figure. Drawback: the procedure converges to a local optimum, which is a global optimum only if the search begins in the neighborhood of that global optimum.
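To make the idea concrete, here is a minimal Python sketch of such a local improvement (hill-climbing) procedure applied to the example function. The step size and the starting points are illustrative choices, not part of the original example.

def f(x):
    # Example objective from the lecture: a nonconvex polynomial on [0, 31].
    return 12*x**5 - 975*x**4 + 28_000*x**3 - 345_000*x**2 + 1_800_000*x

def hill_climb(x0, step=0.01, lo=0.0, hi=31.0):
    # Repeatedly move to the better of the two neighbouring trial solutions
    # until neither improves the objective (a local optimum is reached).
    x = x0
    while True:
        candidates = [min(hi, x + step), max(lo, x - step)]
        best = max(candidates, key=f)
        if f(best) <= f(x):
            return x
        x = best

if __name__ == "__main__":
    for start in (2.0, 15.0, 30.5):
        x_star = hill_climb(start)
        print(f"start = {start:5.1f}  ->  local optimum x = {x_star:6.2f}, f = {f(x_star):,.0f}")

Different starting points end at different local optima (near x = 5, x = 20 and x = 31); only the middle one is the global optimum, which illustrates the drawback noted above.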

Nature of metaheuristics
How can this drawback be overcome? What happens in large problems with many variables? A metaheuristic is a general solution method that orchestrates the interaction between local improvement procedures and higher-level strategies to create a process able to escape from local optima and perform a robust search of the feasible region.

Solutions by metaheuristics
In the sequence of solutions produced by a metaheuristic, a trial solution generated after reaching a local optimum can be inferior to that local optimum. But how do metaheuristics actually solve the problem?

Metaheuristics
Advantage: they deal well with large, complicated problems. Disadvantage: there is no guarantee of finding an optimal solution or even a nearly optimal one. When possible, an algorithm that can guarantee optimality should be used instead. Metaheuristics can be applied to nonlinear or integer programming, but most commonly they are applied to combinatorial optimization problems.

Metaheuristic methods
Tabu Search
Simulated Annealing
Genetic Algorithms (GA)
Ant Colony Optimization (ACO)
Others

Common characteristics
Derivative freeness: the methods rely exclusively on repeated evaluations of the objective function; the search direction follows heuristic guidelines.
Intuitive guidelines: the concepts are usually bio-inspired.
Slowness: slower than derivative-based optimization for continuous optimization problems.
Flexibility: any objective function is allowed (even the structure of a data-fitting model).
Randomness: these stochastic methods use random number generators to determine subsequent search directions; they may be global optimizers given enough computation time (an optimistic view).
Analytic opacity: knowledge about them is based on empirical studies, due to their randomness and problem-specific nature.
Iterative nature: stopping criteria are needed to determine when to terminate the optimization process.

Traveling Salesman Problem
Starting from a home city, determine which route to follow in order to visit each city exactly once before returning to the home city, while minimizing the total length of the tour. The problem can be symmetric or asymmetric. Objective: the route that minimizes the total distance (cost, time). Applications? A problem with n cities and a link between every pair of cities has (n - 1)!/2 feasible routes:
10 cities: 181,440 feasible solutions
20 cities: about 6 x 10^16 feasible solutions
50 cities: about 3 x 10^62 feasible solutions

Some TSP instances can be solved using branch-and-cut algorithms; heuristic methods are more general. In a heuristic method, a new solution is obtained by making small adjustments to the current solution. An example is the sub-tour reversal, which adjusts the sequence of visited cities by reversing the order in which a subsequence of them is visited. For instance, the sub-tour reversal of 3-4 in 1-2-3-4-5-6-7-1 gives 1-2-4-3-5-6-7-1.

Sub-tour reversal algorithm
Initialization: start with any feasible solution.
Iteration: for the current solution, consider all possible ways of performing a sub-tour reversal (except the reversal of the entire tour), and select the one that provides the largest decrease in traveled distance.
Stopping rule: stop when no sub-tour reversal improves the current trial solution, and accept that solution as the final one.
This is also a local improvement algorithm: it does not assure an optimal solution!
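The following Python sketch implements this sub-tour reversal algorithm. The distance matrix is a placeholder (the 7-city distances of the example are given only in a figure), the home city is kept fixed in the first slot, and ties are broken by keeping the current tour.

import itertools

def tour_length(tour, dist):
    # Total length of a closed tour given as a list of city indices that
    # starts and ends at the home city.
    return sum(dist[tour[i]][tour[i + 1]] for i in range(len(tour) - 1))

def subtour_reversal_search(tour, dist):
    # Repeatedly apply the sub-tour reversal that gives the largest decrease
    # in distance, and stop when no reversal improves the current tour.
    tour = list(tour)
    while True:
        best_len, best_tour = tour_length(tour, dist), None
        for i, j in itertools.combinations(range(1, len(tour) - 1), 2):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            length = tour_length(candidate, dist)
            if length < best_len:
                best_len, best_tour = length, candidate
        if best_tour is None:
            return tour
        tour = best_tour

With the distances of the example, starting from 1-2-3-4-5-6-7-1 this search would reproduce the two iterations shown below and stop at 1-2-4-6-5-3-7-1 (distance 64).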

Example
Iteration 1: starting with 1-2-3-4-5-6-7-1 (distance = 69), the 4 possible sub-tour reversals that improve the solution are:
Reverse 2-3: 1-3-2-4-5-6-7-1, distance = 68
Reverse 3-4: 1-2-4-3-5-6-7-1, distance = 65
Reverse 4-5: 1-2-3-5-4-6-7-1, distance = 65
Reverse 5-6: 1-2-3-4-6-5-7-1, distance = 66
Iteration 2: continuing with 1-2-4-3-5-6-7-1, only one sub-tour reversal leads to an improvement: reverse 3-5-6, giving 1-2-4-6-5-3-7-1 with distance = 64. The algorithm then stops, and this last solution is the final solution. Is it the optimal one?

Tabu Search
Fred Glover, 1977. Tabu search includes a local search procedure but allows moves that do not improve on the best solution; it is referred to as a steepest ascent / mildest descent approach. To avoid cycling around a local optimum, a tabu list is added: it records temporarily forbidden moves (tabu moves) that would return the solution to a recently visited one. Tabu search therefore uses memory to guide the search, and it can also include intensification and diversification.

Basic tabu search algorithm
Initialization: start with a feasible initial solution.
Iteration:
1. Use local search to define the feasible moves in the neighborhood.
2. Disregard moves in the tabu list, unless they result in a better solution than the best found so far.
3. Determine which move provides the best solution.
4. Adopt this solution as the next trial solution.
5. Update the tabu list.
Stopping rule: stop after a fixed number of iterations, a fixed number of iterations without improvement, etc., and accept the best trial solution found as the final solution. (A code sketch of this loop is given after the list of questions below.)

Questions in tabu search
1. Which local search procedure should be used?
2. How should the neighborhood structure be defined?
3. How should tabu moves be represented in the tabu list?
4. Which tabu move should be added to the tabu list in each iteration?
5. How long should a tabu move remain in the tabu list?
6. Which stopping rule should be used?
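As referenced above, here is a generic skeleton of the basic tabu search loop, to be read as a sketch: the neighbourhood generator, the cost function, the tabu tenure and the stopping limit are problem-specific and are passed in as parameters.

from collections import deque

def tabu_search(initial, neighbours, cost, tabu_tenure=2, max_no_improve=3):
    # `neighbours(solution)` must yield (move, new_solution) pairs;
    # `cost(solution)` returns the objective value to be minimized.
    current, best = initial, initial
    best_cost = cost(initial)
    tabu = deque(maxlen=tabu_tenure)           # tabu list of forbidden moves
    no_improve = 0
    while no_improve < max_no_improve:
        candidates = []
        for move, solution in neighbours(current):
            # Skip tabu moves unless they beat the best solution found so far
            # (the aspiration criterion used in the examples).
            if move in tabu and cost(solution) >= best_cost:
                continue
            candidates.append((cost(solution), move, solution))
        if not candidates:
            break
        c, move, current = min(candidates, key=lambda t: t[0])
        tabu.append(move)                      # update the tabu list
        if c < best_cost:
            best, best_cost, no_improve = current, c, 0
        else:
            no_improve += 1
    return best, best_cost

The defaults (tabu tenure 2, stop after 3 iterations without improvement) match the minimum spanning tree example that follows.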

Example: minimum spanning tree problem with added constraints
The problem without constraints is solved with a greedy algorithm. Two constraints are then added:
Constraint 1: link AD can be included only if link DE is also included.
Constraint 2: at most one of the three links AD, CD and AB can be included.
The unconstrained solution violates both constraints. To apply tabu search, constraint violations are penalized:
Charge a penalty of 100 if Constraint 1 is violated.
Charge a penalty of 100 if two of the three links in Constraint 2 are included; increase the penalty to 200 if all three are included.

Answers in the tabu search implementation
1. Local search procedure: choose the best immediate neighbor not ruled out by its tabu status.
2. Neighborhood structure: an immediate neighbor is one reached by adding a single link and then deleting one of the other links in the cycle that is created.
3. Form of tabu moves: list the links that are not to be deleted.
4. Addition of a tabu move: add the chosen (added) link to the tabu list.
5. Maximum size of the tabu list: two (half of the total number of links).
6. Stopping rule: three iterations without improvement.

Solving the example
Initial solution: the solution of the unconstrained version.
Cost = 20 + 10 + 5 + 15 + 200 (why?) = 250

Iteration 1. The options for a link to add are BE, CD and DE.
Add  Delete  Cost
BE   CE      75 + 200 = 275
BE   AC      70 + 200 = 270
BE   AB      60 + 100 = 160
CD   AD      60 + 100 = 160
CD   AC      65 + 300 = 365
DE   CE      85 + 100 = 185
DE   AC      80 + 100 = 180
DE   AD      75 +   0 =  75   <- minimum
Application of tabu search: add DE to the network, delete AD from the network, and add DE to the tabu list.

Iteration 2. The options for a link to add are AD, BE and CD. Add BE to the tabu list.
Add  Delete  Cost
AD   DE*     (tabu move)
AD   CE      85 + 100 = 185
AD   AC      80 + 100 = 180
BE   CE     100 +   0 = 100
BE   AC      95 +   0 =  95
BE   AB      85 +   0 =  85   <- minimum
CD   DE*     60 + 100 = 160
CD   CE      95 + 100 = 195
* A tabu move; it is only considered if it results in a better solution than the best trial solution found previously.

Iteration 3. The options for a link to add are AB, AD and CD. Add CD to the tabu list and delete DE from the tabu list.
Add  Delete  Cost
AB   BE*     (tabu move)
AB   CE     100 +   0 = 100
AB   AC      95 +   0 =  95
AD   DE*     60 + 100 = 160
AD   CE      95 +   0 =  95
AD   AC      90 +   0 =  90
CD   DE*     70 +   0 =  70   <- minimum
CD   CE     105 +   0 = 105
* A tabu move; it is only considered if it results in a better solution than the best trial solution found previously.
The tabu move (add CD, delete DE) is accepted because it improves on the best trial solution found so far, and the resulting spanning tree is the optimal solution of the constrained problem.

Traveling salesman problem example (tabu search)
1. Local search procedure: choose the best immediate neighbor not ruled out by its tabu status.
2. Neighborhood structure: an immediate neighbor is one reached by making a sub-tour reversal (which requires adding and deleting two links of the current solution).
3. Form of tabu moves: list links such that a sub-tour reversal would be tabu if both links to be deleted are in the list.
4. Addition of a tabu move: add the two newly added links to the tabu list.
5. Maximum size of the tabu list: four links.
6. Stopping rule: three iterations without improvement.

Solving the problem
Initial trial solution: 1-2-3-4-5-6-7-1, distance = 69.

Iteration 1: choose to reverse 3-4.
Deleted links: 2-3 and 4-5. Added links (tabu list): 2-4 and 3-5.
New trial solution: 1-2-4-3-5-6-7-1, distance = 65.

Iteration 2: choose to reverse 3-5-6.
Deleted links: 4-3 and 6-7 (allowed, since they are not in the tabu list). Added links: 4-6 and 3-7.
Tabu list: 2-4, 3-5, 4-6 and 3-7.
New trial solution: 1-2-4-6-5-3-7-1, distance = 64.

Iteration 3: only two immediate neighbors exist:
Reverse 6-5-3: 1-2-4-3-5-6-7-1, distance = 65 (this would delete links 4-6 and 3-7, which are in the tabu list).
Reverse 3-7: 1-2-4-6-5-7-3-1, distance = 66.
Choose to reverse 3-7. Deleted links: 5-3 and 7-1. Added links: 5-7 and 3-1.
Tabu list: 4-6, 3-7, 5-7 and 3-1.
New trial solution: 1-2-4-6-5-7-3-1, distance = 66.

Iteration 4: four immediate neighbors:
Reverse 2-4-6-5-7: 1-7-5-6-4-2-3-1, distance = 65
Reverse 6-5: 1-2-4-5-6-7-3-1, distance = 69
Reverse 5-7: 1-2-4-6-7-5-3-1, distance = 63
Reverse 7-3: 1-2-4-6-5-3-7-1, distance = 64
Choose to reverse 5-7. Deleted links: 6-5 and 7-3. Added links: 6-7 and 5-3.
Tabu list: 5-7, 3-1, 6-7 and 5-3.
New trial (and final) solution: 1-2-4-6-7-5-3-1, distance = 63.

Simulated Annealing
Kirkpatrick, Gelatt and Vecchi, 1983. Suitable for continuous and discrete optimization problems, and effective in finding near-optimal solutions for large-scale combinatorial problems such as traveling salesman and placement problems. It enables the search process to escape from local minima: instead of the steepest ascent / mildest descent approach of tabu search, it tries to search for the tallest hill, and early iterations take steps in random directions.

The principle is analogous to the behavior of metals cooled at a controlled rate, with the value of the objective function analogous to the energy in a thermodynamic system:
At high temperatures, a new point with higher energy is likely to be accepted.
At low temperatures, the likelihood of accepting a new point with higher energy is much lower.
The annealing (cooling) schedule specifies how rapidly the temperature is lowered from high to low values.

Notation:
Zc = objective function value for the current trial solution.
Zn = objective function value for the current candidate to be the next trial solution.
T = a parameter measuring the tendency to accept the current candidate as the next trial solution when it is not better than the current solution.

Move selection rule: among all immediate neighbors of the current trial solution, select one randomly. To maximize, accept or reject this candidate as follows:
If Zn >= Zc, always accept the candidate.
If Zn < Zc, accept the candidate with probability Prob{acceptance} = e^x, where x = (Zn - Zc)/T.
For minimization problems, reverse Zn and Zc. If the candidate is rejected, repeat for another random immediate neighbor. If no immediate neighbors remain, terminate the algorithm.

Probability of accepting solutions
The larger T is, the higher the probability of acceptance. Simulated annealing starts with a large T, enabling the search to proceed in almost random directions, and gradually decreases T as the iterations proceed, in order to put the emphasis mostly on climbing upward. For example, writing Prob{acceptance} = e^(-x) with x = (Zc - Zn)/T:
x = 0.01  ->  0.990
x = 0.25  ->  0.779
x = 1     ->  0.368
x = 3     ->  0.05
x = 5     ->  0.007
A temperature schedule must be chosen. Implementation of the move selection rule: compare a random number between 0 and 1 with the probability of acceptance.

A simulated annealing algorithm
Initialization: start with a feasible initial trial solution.
Iteration: use the move selection rule to select the next trial solution (if none of the immediate neighbors of the current solution is accepted, the algorithm terminates). Check the temperature schedule: decrease T after a certain number of iterations have been performed.
Stopping rule: stop after a predetermined number of iterations have been performed at the smallest value of T (or when there are no accepted solutions). The best trial solution found at any iteration is the final solution.
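A compact Python sketch of this simulated annealing loop for a maximization problem is given below. The neighbour generator is problem-specific and passed in as a function, and the temperature schedule (T1 = 0.2 Zc, halved after every five iterations) is the one used in the examples that follow; these are assumptions of the sketch, not a general prescription.

import math
import random

def accept(z_new, z_current, T, rng):
    # Move selection rule for maximization: always accept improvements,
    # otherwise accept with probability e^((Zn - Zc) / T).
    if z_new >= z_current:
        return True
    return rng.random() < math.exp((z_new - z_current) / T)

def simulated_annealing(x0, objective, random_neighbour,
                        n_temperatures=5, iterations_per_temperature=5, seed=None):
    rng = random.Random(seed)
    x, z = x0, objective(x0)
    best_x, best_z = x, z
    T = 0.2 * z                                # T1 = 0.2 * Zc
    for _ in range(n_temperatures):
        for _ in range(iterations_per_temperature):
            candidate = random_neighbour(x, rng)
            z_candidate = objective(candidate)
            if accept(z_candidate, z, T, rng):
                x, z = candidate, z_candidate
                if z > best_z:
                    best_x, best_z = x, z
        T *= 0.5                               # T_{i+1} = 0.5 * T_i
    return best_x, best_z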

Questions in simulated annealing
1. How should the initial solution be selected?
2. What is the neighborhood structure that specifies which solutions are immediate neighbors?
3. What device should be used in the move selection rule to randomly select one of the immediate neighbors?
4. What is an appropriate temperature schedule?
These questions are answered in the following examples.

Traveling salesman problem example (simulated annealing)
1. Initial trial solution: any feasible solution, chosen randomly or simply 1-2-3-4-5-6-7-1.
2. Neighborhood structure: an immediate neighbor is one reached by a sub-tour reversal, as described previously.
3. Random selection of an immediate neighbor: the beginning slot of the sub-tour reversal cannot be the first, last, or next-to-last slot, and the ending slot cannot be the last slot.
4. Temperature schedule: five iterations for each of the five values T1 = 0.2 Zc (where Zc is the objective value of the initial solution) and T(i+1) = 0.5 Ti, i = 1, 2, 3, 4.

Solving the problem
Iteration 1: initial trial solution 1-2-3-4-5-6-7-1, Zc = 69, T1 = 13.8. The sub-tour to be reversed can begin anywhere between the second slot (city 2) and the sixth slot (city 6), with equal probability for each of the five slots; after choosing the beginning slot, the end slot is chosen with equal probability among the valid ones. Suppose 3-4 is chosen for reversal, giving 1-2-4-3-5-6-7-1 with Zn = 65. The resulting solutions are not always feasible, in which case a new pair of random numbers is chosen: if the algorithm chooses to reverse 3-4-5, the solution 1-2-5-4-3-6-7-1 is not feasible. Since Zn = 65 < Zc = 69, this becomes the next trial solution.

Suppose that Iteration 2 results in reversing 3-5-6, giving 1-2-4-6-5-3-7-1 with Zn = 64, and that Iteration 3 then results in reversing 3-7, giving 1-2-4-6-5-7-3-1 with Zn = 66. Since Zn > Zc,
Prob{acceptance} = e^((Zc - Zn)/T) = e^(-2/13.8) = 0.865,
so this candidate is accepted if the next random number is below 0.865. One application of SA gave the best solution 1-3-5-7-6-4-2-1 at iterations 14 and 16 (out of 25), with distance = 63.

Nonlinear programming application
Problems of the type: maximize f(x1, ..., xn) subject to Lj <= xj <= Uj, for j = 1, ..., n.
1. Initial trial solution: any feasible solution, for instance xj = (Uj + Lj)/2.
2. Neighborhood structure: any feasible solution (see the random selection of an immediate neighbor).
3. Random selection of an immediate neighbor: set σj = (Uj - Lj)/6 and reset xj = xj + N(0, σj), for j = 1, ..., n, where N(0, σj) is a random observation from a normal distribution with mean 0 and standard deviation σj. Repeat the process until a feasible solution is attained (sketched in code after this list).
4. Temperature schedule: five iterations for each of the five values T1 = 0.2 Zc and T(i+1) = 0.5 Ti, i = 1, 2, 3, 4.
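The neighbour-generation rule of item 3 can be sketched as follows; the bounds are passed in as lists, and the resulting function plugs directly into the simulated_annealing sketch given earlier (both are illustrative, not a prescribed implementation).

import random

def make_neighbour_generator(lower, upper):
    # sigma_j = (U_j - L_j) / 6; resample until the perturbed point is feasible.
    sigmas = [(u - l) / 6.0 for l, u in zip(lower, upper)]

    def random_neighbour(x, rng=random):
        while True:
            candidate = [xj + rng.gauss(0.0, s) for xj, s in zip(x, sigmas)]
            if all(l <= c <= u for c, l, u in zip(candidate, lower, upper)):
                return candidate
    return random_neighbour

# One-variable example problem with 0 <= x <= 31, starting from the midpoint:
neighbour = make_neighbour_generator([0.0], [31.0])
x0 = [(0.0 + 31.0) / 2]    # x = 15.5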

Example
Initial solution: x = 15.5, Zc = f(15.5) = 3,741,121, and T1 = 0.2 Zc = 748,224. σ = (31 - 0)/6 = 5.167.
Iteration 1: x = 15.5 + N(0, 5.167) = 15.5 - 7.5 = 8, and Zn = f(8) = 3,055,616. Since
(Zn - Zc)/T = (3,055,616 - 3,741,121)/748,224 = -0.916,
Prob{acceptance} = e^(-0.916) = 0.40.
After 25 iterations, the algorithm returns a value of x very close to the optimal solution x = 20.

Genetic algorithms: motivation
What has evolution brought us? Vision, hearing, smell, taste, touch, learning and reasoning. Can we emulate the evolutionary process with today's fast computers?

Genetic algorithms
Genetic algorithms are randomized search algorithms based on the mechanics of natural selection and genetics. They combine the principle of natural selection through survival of the fittest with randomized search, they search efficiently in large spaces, and they are robust with respect to the complexity of the search problem. They use a population of solutions instead of searching only one solution at a time.

Basic elements
A candidate solution is encoded as a string of characters, in binary or real representation; this bit string is called a chromosome. The solution represented by a chromosome is an individual, and a number of individuals form a population. The population is updated iteratively; each iteration is called a generation. The objective function is called the fitness function, and the fitness value is maximized. Multiple solutions are evaluated in parallel.

A basic genetic algorithm
Initialization: start with an initial population of feasible solutions, e.g. generated randomly, and evaluate the fitness of each member of the population.
Iteration:
1. Select some members of the population to become parents.
2. Cross the genetic material of the parents in a crossover operation; mutation can occur in some genes.
3. Take care of infeasible solutions, repeating the birth process until a feasible solution is obtained.
4. Evaluate the fitness of the new members, including the clones.
Stopping rule: stop after a fixed number of iterations, a fixed number of iterations without improvement, etc.
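The basic genetic algorithm above can be written as the following skeleton; the encoding, fitness, parent selection, crossover and mutation operators are problem-specific callables, and the elitism and stopping parameters shown are only illustrative defaults.

import random

def genetic_algorithm(initial_population, fitness, select_parents, crossover,
                      mutate, n_elite=4, max_no_improve=5, seed=None):
    rng = random.Random(seed)
    population = list(initial_population)
    best = max(population, key=fitness)
    no_improve = 0
    while no_improve < max_no_improve:
        # 1. Select parents; 2. crossover and mutation produce the children.
        children = [mutate(crossover(p1, p2, rng), rng)
                    for p1, p2 in select_parents(population, rng)]
        # 3. Infeasible children are assumed to be repaired or regenerated
        #    inside the operators; 4. evaluate the new generation.
        elite = sorted(population, key=fitness, reverse=True)[:n_elite]
        population = elite + children          # elitism keeps the best "clones"
        new_best = max(population, key=fitness)
        if fitness(new_best) > fitness(best):
            best, no_improve = new_best, 0
        else:
            no_improve += 1
    return best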

Encoding, crossover and mutation
[Figure: binary encoding of chromosomes into genes; a one-point crossover of two parent chromosomes at a crossover point; mutation of a single bit; and a genetic algorithm iteration in which the current generation produces the next generation through elitism, selection, crossover and mutation.]

Spaces in a GA iteration
A GA iteration takes generation N to generation N+1. The genetic operators act in gene space, (de)coding maps chromosomes into the problem space, and the fitness function maps solutions into fitness space.

Selection (reproduction)
Proportional selection: each individual is selected with a probability proportional to its fitness (its fitness divided by the sum of all fitness values in the population).
Tournament selection: randomly select pairs; the fitter individual of each pair wins (deterministically or probabilistically).
A code sketch of both schemes follows the question list below.

Questions in genetic algorithms
1. What is the encoding scheme?
2. What should the population size be?
3. How should the individuals of the current population be selected to become parents?
4. How should the features of the children be derived from the features of the parents?
5. How should mutations occur in the features of the children?
6. Which stopping rule should be used?
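As referenced above, here is a sketch of the two selection schemes. Fitness values are assumed to be non-negative for proportional (roulette-wheel) selection, and the deterministic variant of tournament selection is shown.

import random

def proportional_selection(population, fitness, rng=random):
    # Select one individual with probability fitness / (sum of all fitnesses).
    total = sum(fitness(ind) for ind in population)
    threshold = rng.uniform(0.0, total)
    cumulative = 0.0
    for individual in population:
        cumulative += fitness(individual)
        if cumulative >= threshold:
            return individual
    return population[-1]          # guard against floating-point rounding

def tournament_selection(population, fitness, rng=random):
    # Randomly select a pair and return the fitter of the two.
    a, b = rng.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b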

Nonlinear programming example (genetic algorithm)
1. Encoding scheme: integers from 0 to 31, so five binary genes are needed. Example: x = 25 is 11001 in base 2.
2. Population size: 10 (the problem is simple).
3. Selection of parents: select randomly 4 from the 5 most fit individuals (according to fitness) and 2 from the 5 least fit. This gives 3 pairs of parents, which produce 6 children. Elitism: the four best solutions are cloned into the next generation.
4. Crossover operator: when a bit differs between the two parents, select 0 or 1 at random (uniform distribution); otherwise the child inherits the common bit. Each pair of parents produces two children in this way.
5. Mutation operator: the mutation rate is 0.1 for each gene.
6. Stopping criterion: stop after 5 consecutive generations without improvement.
The fitness value is simply the objective function value in this example. Six generations were enough here (the optimum was found in the first generation).

Traveling salesman problem example (genetic algorithm)
1. Encoding scheme: exactly as before, with no special encoding, e.g. 1-2-3-4-5-6-7-1. The initial population is generated randomly, using the possible links between cities.
2. Population size: 10 (the problem is simple).
3. Selection of parents: select randomly 4 from the 5 most fit and 2 from the 5 least fit, giving 3 pairs of parents that produce 6 children. Elitism: the four best solutions are cloned into the next generation.
4. and 5. Operators: each child is built link by link with the following algorithm:
  1. Options for the next link: links out of the current city, not yet in the child's tour, that are used by the parents.
  2. Selection of the next link: randomly, with a uniform distribution over the options.
  3. Mutation: if a mutation occurs (random number < 0.1), replace the last link with any other possible one, unless this is impossible.
  4. Repetition: while there is still more than one link to add to the child, go back to step 1.
  5. Completion: add the last city and close the tour if the required links exist. If not, the solution is infeasible (a miscarriage) and the whole process is repeated.
6. Stopping criterion: 5 iterations without improvement.

Example: P1 = 1-2-3-4-5-6-7-1 and P2 = 1-2-4-6-5-7-3-1.
Building a child: from P1 the first link can be 1-2 or 1-7; from P2 it can be 1-2 or 1-3. Link 1-2 has a 50% probability of being chosen. Why? Suppose 1-2 is chosen. At the next step, links 2-3 (from P1) and 2-4 (from P2) are the options. Suppose 2-4 is chosen: the child so far is 1-2-4. And so on!
Note that this problem is too simple, and solutions (individuals) can be repeated in the population. See this example in Hillier's book.
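A sketch of this link-based child construction is given below. It assumes a complete graph (so the closing link always exists), omits the mutation step, and returns None for a miscarriage so that the caller can repeat the process; shared parent links are kept twice so that, as in the example, a link used by both parents is twice as likely to be chosen.

import random

def build_child(parent1, parent2, rng=random):
    # Parents are closed tours such as [1, 2, 3, 4, 5, 6, 7, 1].
    n = len(parent1) - 1

    def links(tour):
        return [frozenset((tour[i], tour[i + 1])) for i in range(len(tour) - 1)]

    parent_links = links(parent1) + links(parent2)   # multiset of parent links
    start = parent1[0]
    child, visited, current = [start], {start}, start
    while len(child) < n:
        # Options: parents' links out of the current city that do not revisit
        # a city already in the child's tour.
        options = [next(iter(link - {current})) for link in parent_links
                   if current in link and not (link - {current}) & visited]
        if not options:
            return None                              # miscarriage: caller retries
        current = rng.choice(options)
        child.append(current)
        visited.add(current)
    child.append(start)                              # close the tour
    return child

For P1 = [1, 2, 3, 4, 5, 6, 7, 1] and P2 = [1, 2, 4, 6, 5, 7, 3, 1], the first option list is [2, 7, 2, 3], so city 2 is chosen with probability 50%, matching the example above.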

The foraging behaviour of ants How can almost blind animals manage to learn the shortest route paths from their nests to the food source and back? Artificial ants Artificial ants move in graphs nodes / arcs environment is discrete Food Source Destination a) Ants follow path between the Nest and the Food Source b) Ants go around the obstacle following one of two different paths with equal probability As the real ants, ts,they: choose paths based on pheromone deposit pheromones on the travelled paths The environment updates the pheromones c) On the shorter path, more pheromones are laid down d) At the end, all ants follow the shortest path. Nest Source Artificial ants have more abilities: they can see (heuristic η ) [former visibility] they have memory (feasible neighbourhood N) [former tabu list Γ] Optimization & Decision 9 Photos: http://iridia.ulb.ac.be/~mdorigo/aco/realants.html 473 Optimization & Decision 9 474 Notation for artificial ants in TSP c ij cost for transversal from city i to city j; τ ij pheromone in edge (i,j); Δτ ij amount of pheromone deposited in edge (i,j); η ij = 1/ c ij local heuristic; p ij probability that ant k in city i visits city j; N set of cities still to be visited by ant k in city i ; ρ evaporation coefficient; t time is performance index; α, β parameters that determine relative importance of pheromone versus heuristic; Optimization & Decision 9 475 Mathematical framework of real ants Choose trail i= 1 τ 1 Deposit pheromone Environment (time) updates pheromones τ 13 j = j = 3 Ant 1, t= i = 1 τ 1 i = 1 τ k p = f( τ ) 13 ij +Δτ Ant, t= τ 1 τ ij( t+ = τij( t) k ρ) +Δτ 1) (1 ij Optimization & Decision 9 476 j = j = τ 13 ij Ant 1, t=1 j = 3 Mathematical framework Choose node α β τij ηij, if j Ν α β k p = τ η ij ij ij j Ν, otherwise Update Feasible Neighbourhood List N = N\ j Pheromone update k τ ( l+ 1) = τ ( l) (1 ρ) +Δτ ij ij ij 1/ cij, ifant k travels from i to j k Δ τ ij =, otherwise Initialization Set τ ij = τ For l =1: N max Build a complete tour For i = 1 to n For k = 1 to m Choose node Update N Apply Local Heuristic end end Analyze solutions For k = 1 to m Compute f k end Update pheromones end Optimization & Decision 9 477 Traveling Salesman Problem n cities (5) Complexity: (n 1)! / 1 8 8 5 4 17 5 Optimization & Decision 9 478 17 5 3 1

Solution using the nearest city heuristic
[Figure: steps 1 to 5 of the nearest city heuristic constructing a tour on the 5-city instance.]
The final solution is obviously non-optimal. This heuristic can give the optimal solution if it is given a proper initial node.
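For comparison with the ACO sketch, the nearest city heuristic shown in the figures can be written as follows; the distance matrix and the choice of starting city are placeholders.

def nearest_city_tour(dist, start=0):
    # From the current city, always move to the closest city not yet visited,
    # then return to the starting city to close the tour.
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        current = tour[-1]
        nearest = min(unvisited, key=lambda city: dist[current][city])
        tour.append(nearest)
        unvisited.remove(nearest)
    tour.append(start)
    return tour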

Iteration l=1, ant m= Iteration l=1, pheromone update (1) Step # 7% 34% 39% Step #3 35% 65% The final solution of ant m=1 is d=1.36. The reinforcement Δτ ij produced by this ant m=1 is,8. Step #4 Step #5 1% The final solution of ant m= is d=1,47. The reinforcement Δτ ij produced by ant m= is,95. Optimization & Decision 9 f = + 5 + 5 + + = 1.47 485 Optimization & Decision 9 486 Updating pheromone matrix () Iteration l=, ant m=1 The pheromone update can be done following different approaches: Considering the pheromone dropped by every ants.5.5.5.5.5.8.95.5.5.5.5.5.8.95 τ ( l + 1) =.5.5.5.5.5 ( 1 ρ ) +.8 +.95.5.5.5.5.5.8.95.5.5.5.5.5.8.95 Considering the pheromone dropped by the best ant of the present iteration.5.5.5.5.5.95.5.5.5.5.5.95 τ ( l + 1) =.5.5.5.5.5 ( 1 ρ ) +.95.5.5.5.5.5.95.5.5.5.5.5.95 Considering the pheromone dropped by the best ant in all iterations (after iteration N=1, this is the same as the previous approach) The pheromone trails have different intensities Pheromone trail and heuristic information have the same weight α = 1, β = 1, ρ=.1 An ant is randomly placed The probability to choose is p 41 =19% p 4 =6% p 43 =3% p 45 =3% Ant m = 1 chooses node 5 Step #1 Optimization & Decision 9 487 Optimization & Decision 9 488 Iteration l=, ant m=1 Iteration l=, ant m= 46% Step # 3% Step #4 % 1% Step #3 Step #5 71% 9% The pheromone trails have different intensities Pheromone trail and heuristic information have the same weight α = 1, β = 1, ρ=.1 An ant is randomly placed The probability to choose is p 1 =6% p 3 =9% p 4 =6% p 5 =19% Ant m = chooses node 3 Step #1 Optimization & Decision 9 f1 = + + + 5 + 5 = 1.47 489 Optimization & Decision 9 49 14

Iteration l = 2, ant m = 2 (continued)
[Figure: remaining steps of ant 2's tour construction in iteration 2.]
The tour of ant m = 2 also has length f_2 = 10.47.

Iteration l = 2, pheromone update
The final solutions of ant m = 1 and ant m = 2 both have length d = 10.47, so the reinforcement produced by each ant is 0.095.

Updating the pheromone matrix
As before, the update τ(l+1) = (1 - ρ) τ(l) + Δτ can consider the pheromone dropped by every ant, only the pheromone dropped by the best ant of the present iteration, or only the pheromone dropped by the best ant over all iterations.
[Figure: the resulting 5x5 pheromone matrices for the three update approaches.]
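The three update approaches can be captured in one small function, sketched below under the same assumptions as the earlier ACO sketch (per-edge deposit of 1/length, symmetric pheromone matrix); `ant_tours` holds the (tour, length) pairs of the current iteration and `best_so_far` the best pair found in any iteration.

def update_pheromones(tau, ant_tours, best_so_far, rho=0.1, strategy="all"):
    n = len(tau)
    for i in range(n):                         # evaporation: tau <- (1 - rho) * tau
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    if strategy == "all":                      # pheromone dropped by every ant
        depositors = ant_tours
    elif strategy == "iteration_best":         # best ant of the present iteration
        depositors = [min(ant_tours, key=lambda pair: pair[1])]
    else:                                      # "global_best": best ant of all iterations
        depositors = [best_so_far]
    for tour, length in depositors:
        for a, b in zip(tour, tour[1:]):
            tau[a][b] += 1.0 / length
            tau[b][a] += 1.0 / length
    return tau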