Pre-requisite Material for Course Heuristics and Approximation Algorithms


This document contains an overview of the basic concepts needed in preparation for the course. In addition, some experience with Java programming and probability theory will be useful, but this is desirable rather than a mandatory pre-requisite. At the start of the course, students will be asked to indicate their level of expertise in these topics in order to inform the delivery of the course.

Contents
1. Basics of Complexity Theory
   1.1. Computability
   1.2. Computational Problems
   1.3. Computational Algorithms
   1.4. Intractable and Undecidable Problems
   1.5. Time Complexity
   1.6. Tackling NP-Complete Problems
2. Basics of Heuristic Search for Optimization
   2.1. Optimization Problems
   2.2. Linear Programming (LP)
   2.3. Integer Programming (IP)
   2.4. Discrete/Continuous/Combinatorial Optimization
   2.5. Approximation Algorithms
   2.6. Greedy Algorithms and Neighbourhood Search
   2.7. Basics of Meta-heuristics
   2.8. Basics of Evolutionary Algorithms

ASAP Research Group, School of Computer Science, The University of Nottingham

1. Basics of Complexity Theory

1.1 Computability

Computability focuses on answering questions such as: What class of problems can be solved using a computational algorithm? Is there an efficient computational algorithm to solve a given problem?

There have been many attempts by computer scientists to define precisely what an effective procedure or computable function is. Intuitively, a computable function is one that can be implemented on a computer, whether in a reasonable computational time or not. An idealised formulation of an effective procedure or computable function involves:
1. A language in which a set of rules can be expressed.
2. A rigorous description of a single machine which can interpret the rules in the given language and carry out the steps of a specified process.

1.2 Computational Problems

The classical Travelling Salesman Problem (TSP): given a set of c cities 1, 2, ..., c and the distance d(i,j) (a non-negative integer) between any two cities i and j, find the shortest tour of the cities. The TSP is one of the most intensively studied problems in mathematics and computer science.

A computational problem (e.g. the classical TSP) is generally described by:
- Its parameters (e.g. the set of cities, the distances between cities).
- A statement of the properties required of a solution (e.g. an ordering of the cities such that the travelling distance is minimised).

An instance of a computational problem gives specific values to the problem parameters, and a solution to that instance has a specific value. The size of a problem instance is defined in terms of the amount of input data required to describe its parameters. In practice, the instance size is conveniently expressed in terms of a single variable. For example, the size of an instance of the TSP can be expressed by:
- A finite input string describing exactly all parameter values (cities and distances).
- A number intended to reflect the size of the input string (number of cities).

An encoding scheme is a mapping between a problem instance and the finite input string that describes the instance. A problem can be described or represented using different encoding schemes.

Example. Consider a TSP instance over four cities c1, c2, c3, c4 forming a complete graph (the original figure shows the six edges labelled with distances such as 7, 4, 6, 12 and 15). Some encoding schemes that can be used to describe this instance are:

a) Vertex list - edge list:
{c1,c2,c3,c4}
{(c1,c2),(c1,c3),(c1,c4),(c2,c3),(c2,c4),(c3,c4)}

b) Neighbour list (each edge listed once):
c1 - {c2,c3,c4}
c2 - {c3,c4}
c3 - {c4}

c) Adjacency matrix:
     c1 c2 c3 c4
c1    0  1  1  1
c2    1  0  1  1
c3    1  1  0  1
c4    1  1  1  0

1.3 Computational Algorithms

A computational algorithm is a deterministic step-by-step procedure to solve a computational problem. That is, the computational algorithm should produce a solution for any instance of the computational problem. The interest is in finding an efficient algorithm, in terms of computing resources, to solve a given problem. Practical computing time is typically the factor which determines whether an algorithm can be considered efficient or not. The time complexity of a computational algorithm refers to the largest amount of time required by the algorithm to solve a problem instance of a given size.
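To make the instance/encoding distinction concrete, the following minimal Java sketch (mine, not part of the original document) stores a symmetric TSP instance as a distance matrix and evaluates a tour. Since the exact edge weights of the figure above were partly lost in transcription, the distance values below are illustrative placeholders.

import java.util.*;

// Minimal sketch: a symmetric TSP instance encoded as a distance matrix.
// The distances are illustrative placeholders, not the original figure's values.
public class TspInstance {
    static final int[][] DIST = {
        {0,  7,  4,  6},
        {7,  0, 12,  7},
        {4, 12,  0, 15},
        {6,  7, 15,  0}
    };

    // Length of a tour given as a permutation of city indices, returning to the start.
    static int tourLength(int[] tour) {
        int length = 0;
        for (int i = 0; i < tour.length; i++) {
            length += DIST[tour[i]][tour[(i + 1) % tour.length]];
        }
        return length;
    }

    public static void main(String[] args) {
        System.out.println(tourLength(new int[]{0, 1, 2, 3})); // one of the 3 distinct 4-city tours
    }
}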

For a given problem, how do we assess whether a computational algorithm is efficient? From a computational perspective, the key distinction is between polynomial time and exponential time complexity. The complexity of a function f(n) is O(g(n)) if f(n) ≤ c·g(n) for all n > 0 and some constant c. An algorithm has polynomial time complexity if its complexity function is O(g(n)) where g(n) is a polynomial function of the input length n.

1.4 Intractable and Undecidable Problems

A computational problem is considered intractable if no computationally efficient algorithm can be given to solve it, or in other words, if no polynomial time algorithm can be found to solve it. Although polynomial time algorithms are preferred, some exponential time algorithms are considered good enough for some problems. This is because an algorithm may have exponential complexity in the worst case yet perform satisfactorily on average. The tractability of a given problem is independent of the encoding scheme and of the computer model.

The intractability of a problem can arise in two ways:
- An exponential amount of computational time is needed to find a solution. For example, solving the TSP with algorithm A3.
- The solutions themselves are so long that they cannot be described by an expression polynomial in the length of the input. For example, in a TSP instance, asking for all tours that have length L or less.

A computational problem is considered undecidable if no algorithm at all can be given to solve it. Therefore, an undecidable problem is also intractable. Many intractable problems are decidable and can be solved in polynomial time with a non-deterministic computing model. The NP-complete class of problems includes decision problems (the answer is either YES or NO) that can be solved by a non-deterministic computing model in polynomial time. The conjecture is that NP-complete problems are intractable, but this is very much an open question. At the least, if a problem is NP-complete, there is no known polynomial time algorithm to solve it.

1.5 Time Complexity

We attempt to separate the practically solvable problems from those that cannot be solved in a reasonable time.

When constructing a computer algorithm, it is important to assess how expensive the algorithm is in terms of storage and time. Algorithm complexity analysis tells us how much time an algorithm takes to solve a problem. The size of a problem instance is typically measured by the size of the input that describes the given instance. Algorithm complexity analysis is based on counting primitive operations: arithmetic, logical, reads, writes, data retrieval, data storage, etc.

time_A(n) expresses the time complexity of algorithm A and is defined as the greatest number of primitive operations that could be performed by the algorithm given a problem instance of size n. We are usually more interested in the rate of growth of time_A(n) than in the actual number of operations in the algorithm. Thus, to determine whether it is practical or feasible to implement A, we require a not-too-large upper bound for time_A(n).

The time complexity of algorithms is expressed using the O-notation. The complexity of a function f(n) is O(g(n)) if f(n) ≤ c·g(n) for all n > 0 and some constant c. Let f and g be functions of n; then:
- f(n) = O(g(n)) means that f is of the order of g, that is, f grows at the same rate as g or slower.
- f(n) = Ω(g(n)) means that f grows at the same rate as g or faster.
- f(n) = Θ(g(n)) means that f and g grow at the same rate.

An algorithm is said to perform in polynomial time if and only if its time complexity function is O(n^k) for some k ≥ 0. That is, an algorithm A is polynomial time if time_A(n) ≤ f(n) and f is a polynomial, i.e. f has the form:
a_1·n^k + a_2·n^(k-1) + ... + a_m

The class P is defined as the class of problems which can be solved by polynomial time deterministic algorithms. The intuition is that problems in P can be solved quickly. The class NP is defined as the class of problems which can be solved by polynomial time non-deterministic algorithms. The intuition is that solutions to problems in NP can be verified quickly.
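As a worked illustration of these definitions (the numbers here are mine, not from the original text):

\[
f(n) = 3n^2 + 5n + 2 \le 3n^2 + 5n^2 + 2n^2 = 10n^2 \text{ for all } n \ge 1
\quad\Rightarrow\quad f(n) = O(n^2) \text{ with } c = 10.
\]

The practical gap between polynomial and exponential growth is dramatic: at $10^9$ primitive operations per second, an $n^2$ algorithm with $n = 50$ performs $2500$ operations (a few microseconds), whereas a $2^n$ algorithm performs $2^{50} \approx 1.1 \times 10^{15}$ operations, roughly 13 days.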

Within the class NP there are easy problems (those in P) and very much harder problems called NP-complete problems. Intuitively, the NP-complete problems are the hardest problems in NP, as established by Cook's theorem. (The original document shows a diagram here of class P contained within class NP, with the NP-complete class at the hard end of NP.)

1.6 Tackling NP-Complete Problems

There are two broad approaches:
1. Search techniques that, although of exponential time complexity, aim for optimality and improve over exhaustive search. These include: 1) techniques that construct partial solutions, e.g. branch and bound, and 2) techniques that reduce the worst-case time complexity by making clever choices while searching the tree, e.g. specific algorithms for some NP-complete problems.
2. Search techniques that do not aim to find optimal solutions but instead aim for sub-optimal ones, that is: 1) an approximate solution that is optimal up to a small constant factor, e.g. approximation algorithms, or 2) a good-enough solution found reasonably fast, e.g. local search heuristics.

A combinatorial optimization problem Π (maximization or minimization) consists of three parts:
1. A set of instances D_Π.
2. For each instance I, a finite set S_Π(I) of candidate solutions to I.
3. An evaluation function F that assigns to each instance I and each candidate solution s ∈ S_Π(I) a positive rational number F(I,s), called the solution value of s.

If Π is a maximisation problem, then an optimal solution s* is one such that F(I,s*) ≥ F(I,s) for all s ∈ S_Π(I), and OPT(I) denotes the value of an optimal solution to the instance I.

An algorithm A is called an approximation algorithm for Π if it finds a solution s ∈ S_Π(I) with value A(I). The algorithm is called an optimization algorithm if A(I) = OPT(I). If Π is NP-complete, no polynomial time optimization algorithm can be found unless P = NP. The option then is to find an approximation algorithm or heuristic A that runs in reasonable time and, for each instance I of Π, returns a value A(I) close to OPT(I).
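A standard way to make "A(I) close to OPT(I)" precise — the usual textbook convention, not stated explicitly in the original text — is the performance ratio:

\[
R_A(I) = \frac{A(I)}{OPT(I)} \ \text{(minimisation)}, \qquad
R_A(I) = \frac{OPT(I)}{A(I)} \ \text{(maximisation)},
\]

so that $R_A(I) \ge 1$ always, with equality exactly when A finds an optimal solution. The bin packing guarantees in Section 2.5 are bounds of essentially this kind.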

2. Basics of Heuristic Search for Optimization

2.1 Optimization Problems

The basic ingredients of an optimization problem are:
- A set of decision variables, which affect the value of the objective function. In the travelling salesman problem, the variables might be of the type x_ij ∈ {0,1}, where the value of the variable indicates whether city j is visited immediately after city i or not; in a data-fitting problem, the variables are the parameters that define the model.
- An objective function, which we want to minimize or maximize. It relates the decision variables to the goal and measures goal attainment. For instance, in the travelling salesman problem, we want to minimise the total distance travelled when visiting all cities in a tour; in fitting experimental data to a user-defined model, we might minimize the total deviation of the observed data from the predictions of the model.
- A set of constraints, which allow the variables to take on certain values but exclude others. In some variations of the travelling salesman problem, it may not be allowed to visit certain cities in a certain sequence, restricting the values that certain decision variables can take in a solution.

The optimization problem is then defined as follows: find values for all the decision variables that minimize or maximize the objective function while satisfying the given constraints.

Formal representations of optimization problems are given by mathematical programming. Mathematical programming is used to describe problems of choosing the best element from some set of available alternatives or, in OR terms, problems of how to allocate limited resources. The general mathematical formulation is:

minimize f(x)
subject to g_i(x) ≤ b_i, i = 1,...,m
           h_j(x) = c_j, j = 1,...,n

where x is a vector of decision variables and f(·), g_i(·), h_j(·) are general functions. A similar formulation holds when the objective function has to be maximized. Placing restrictions on the type of functions under consideration and on the values that the decision variables can take leads to specific classes of optimization problems. A solution which satisfies all constraints is called a feasible solution.
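As a small illustration of this general form (the numbers are invented for this example):

\begin{align*}
\text{minimize } \; & f(x) = 3x_1 + 2x_2 \\
\text{subject to } \; & g_1(x) = x_1 + x_2 \le 4, \\
& h_1(x) = x_1 - x_2 = 1, \\
& x_1, x_2 \ge 0.
\end{align*}

Any point satisfying both constraints, e.g. $(x_1, x_2) = (2, 1)$, is a feasible solution; and since $f$, $g_1$ and $h_1$ are all linear, this instance belongs to the LP class introduced next.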

2.2 Linear Programming (LP)

In an LP formulation, f(·), g_i(·), h_j(·) are linear functions of the decision variables. LP is one of the most important tools in OR. The typical LP problem involves limited resources, competing activities, a measure of solution quality, constraints on the use of the resources, and the goal of finding the optimal solution. LP is a mathematical model with:
- decision variables and parameters
- linear algebraic expressions (objective function and constraints)
- planning activities

The standard LP formulation models the problem of finding the optimal allocation of limited resources to competing activities. Note that constraints can involve inequalities (strict or not) as well as equalities.

2.3 Integer Programming (IP)

In many applications, the solution of an optimization problem makes sense only if the decision variables are integers; a solution is then x ∈ Z^n, where Z^n is the set of n-dimensional integer vectors. In mixed-integer programs, some components of x are allowed to be real. In general, integer linear problems are much more difficult to solve than regular linear programs. Integer programming problems, such as the travelling salesman problem, are often expressed in terms of binary variables (a sketch of such a formulation is given below).

Linear and integer programming have proved valuable for modelling many and diverse types of problems in manufacturing, routing, scheduling, assignment, and design. Industries that make use of LP and its extensions include transportation, energy, telecommunications, and manufacturing of many kinds.
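The following sketch shows the binary-variable core common to TSP formulations; it is a standard textbook fragment rather than something taken from the original document, and the subtour elimination constraints needed for a complete model are omitted for brevity. With $x_{ij} = 1$ when city j is visited immediately after city i:

\begin{align*}
\text{minimize } \; & \sum_{i \ne j} d(i,j)\, x_{ij} \\
\text{subject to } \; & \sum_{j \ne i} x_{ij} = 1 \quad \text{for all } i \quad \text{(leave each city exactly once)} \\
& \sum_{i \ne j} x_{ij} = 1 \quad \text{for all } j \quad \text{(enter each city exactly once)} \\
& x_{ij} \in \{0,1\},
\end{align*}

plus subtour elimination constraints to ensure that the selected edges form a single tour.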

2.4 Discrete/Continuous/Combinatorial Optimization

In discrete optimization, the variables used in the mathematical programming formulation are restricted to assume only discrete values, such as the integers. In continuous optimization, the variables used in the objective function can assume real values, e.g. values from intervals of the real line. In combinatorial optimization, the decision variables are discrete, which means that the solution is a set or sequence of integers or other discrete objects; the number of possible solutions is a combinatorial number (an extremely large number, such as 10^50). Combinatorial optimization problems include problems on graphs, matroids and other discrete structures.

An example of combinatorial optimization is the set covering problem. Given a finite set of N items and a set U of available features, the problem is to select the smallest subset of items that together include the subset F ⊆ U of desirable features. Assume that c_ij = 1 if item i includes feature j, and c_ij = 0 otherwise. The mathematical programming formulation for the set covering problem is:

minimize Σ_i x_i
subject to Σ_i c_ij · x_i ≥ 1 for each desirable feature j ∈ F
           x_i ∈ {0,1}, i = 1,...,N

where x_i = 1 indicates that item i is selected. (The original document also instantiates this formulation for a specific example instance, which has not survived transcription.)

Why is set covering a difficult optimization problem to solve? For a set covering instance with N = 4 decision variables x_i, the corresponding search tree has a branch for every assignment of 0/1 values to the variables: each branch represents a potential solution, and a search algorithm needs to explore this search space. As the number of decision variables increases, the search tree grows exponentially (2^N leaves for N binary variables). A greedy heuristic, such as the one sketched below, sidesteps this explosion at the price of guaranteed optimality.
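One classic way to tackle set covering heuristically is the greedy rule "repeatedly pick the item covering the most uncovered features". The Java sketch below is my illustration of that rule (it is not in the original text, and the instance data is invented); it anticipates the greedy algorithms of Section 2.6.

import java.util.*;

// Greedy heuristic sketch for set covering: repeatedly select the item
// that covers the largest number of still-uncovered features.
public class GreedySetCover {
    public static void main(String[] args) {
        // cover[i][j] == 1 means item i includes feature j (the c_ij of the text).
        int[][] cover = {
            {1, 1, 0, 0, 0},
            {0, 1, 1, 1, 0},
            {0, 0, 0, 1, 1},
            {1, 0, 1, 0, 1}
        };
        int numFeatures = 5;
        Set<Integer> uncovered = new HashSet<>();
        for (int j = 0; j < numFeatures; j++) uncovered.add(j);

        List<Integer> selected = new ArrayList<>();
        while (!uncovered.isEmpty()) {
            int bestItem = -1, bestGain = 0;
            for (int i = 0; i < cover.length; i++) {
                int gain = 0;                          // number of new features item i covers
                for (int j : uncovered) gain += cover[i][j];
                if (gain > bestGain) { bestGain = gain; bestItem = i; }
            }
            if (bestItem < 0) break;                   // remaining features cannot be covered
            for (int j = 0; j < numFeatures; j++) {
                if (cover[bestItem][j] == 1) uncovered.remove(j);
            }
            selected.add(bestItem);
        }
        System.out.println("Selected items: " + selected); // [1, 3] for this instance
    }
}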

2.5 Approximation Algorithms

In the BIN PACKING problem we are given a set of items U = {u_1, u_2, ..., u_n}, each with a size given by a rational number s_i, and a set of bins each with capacity equal to 1. The optimization problem is to find a partition of U into disjoint subsets U_1, U_2, ..., U_k such that the sum of the sizes of the items in each U_i is no more than 1 and such that k is as small as possible. That is, the goal is to pack all the items into as few bins as possible.

The First Fit (FF) algorithm is an approximation algorithm for BIN PACKING:
- given the sequence of unit-capacity bins B_1, B_2, B_3, ...
- place the items into the bins, one at a time; item u_i is placed in the lowest-indexed bin with enough remaining capacity for its size s_i.
It has been proven that, in the worst case, FF(I) ≤ 1.7·OPT(I) + 2.

The Best Fit (BF) algorithm is another approximation algorithm for BIN PACKING:
- given the sequence of unit-capacity bins B_1, B_2, B_3, ...
- place the items into the bins, one at a time; item u_i is placed in the bin whose current content is closest to, without exceeding, 1 − s_i.
It has been proven that, in the worst case, BF(I) ≤ 1.7·OPT(I) + 2.

The performance of the FF and BF approximation algorithms for BIN PACKING depends very much on the ordering of the items, in particular on whether larger or smaller items appear earlier in the sequence.
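A direct transcription of the First Fit rule into Java might look as follows (my sketch; the item sizes are invented). Each bin is represented only by its remaining capacity:

import java.util.*;

// First Fit sketch for BIN PACKING: each item goes into the
// lowest-indexed open bin that still has room for it.
public class FirstFit {
    static int firstFit(double[] sizes) {
        List<Double> remaining = new ArrayList<>(); // remaining capacity per open bin
        for (double s : sizes) {
            int bin = -1;
            for (int b = 0; b < remaining.size(); b++) {
                if (remaining.get(b) >= s) { bin = b; break; } // lowest-indexed fit
            }
            if (bin < 0) remaining.add(1.0 - s);               // open a new unit-capacity bin
            else remaining.set(bin, remaining.get(bin) - s);
        }
        return remaining.size(); // FF(I): number of bins used
    }

    public static void main(String[] args) {
        double[] sizes = {0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1}; // invented instance
        System.out.println("Bins used: " + firstFit(sizes));
        // Sorting the items in non-increasing order of size before calling firstFit
        // turns this into First Fit Decreasing (FFD), described next.
    }
}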

The First Fit Decreasing (FFD) algorithm is another approximation algorithm for BIN PACKING:
- given the sequence of unit-capacity bins B_1, B_2, B_3, ...
- order the items in non-increasing order of their size
- place the items into the bins, one at a time; item u_i is placed in the lowest-indexed bin with enough remaining capacity for its size s_i (as in FF).
It has been proven that, in the worst case, FFD(I) ≤ 1.22·OPT(I) + 4.

2.6 Greedy Algorithms and Neighbourhood Search

Real-world optimization problems (combinatorial or continuous) are difficult to solve for several reasons:
- The size of the search space is very large (exhaustive search is impractical).
- Simplified models are needed to facilitate an answer.
- It is difficult to design an accurate evaluation function.
- A large number of constraints may exist.
- It is difficult to collect accurate data.
- There are human limitations to constructing solutions.

Hard constraints must be satisfied for a solution to be feasible. Soft constraints are not mandatory, but are desirable conditions in good quality solutions.

Given a search space S and a set of feasible solutions F ⊆ S, a search problem is to find a solution x ∈ F that optimises the evaluation function. Two solutions x, y are said to be neighbours if they are close to each other, in the sense that the distance between their corresponding encodings or representations is within a certain limit.

Some search algorithms work with complete solutions, i.e. they try to move from the current solution to a better one; neighbourhood search is an example. Other search algorithms work with partial solutions, i.e. they construct a complete solution step by step; greedy algorithms are an example.

Example. Consider a symmetric n-city TSP over a complete graph, with solutions encoded as permutations of the integers 1 to n, e.g. 13546278 for n = 8. Then:
- total number of permutations: n!
- ignoring symmetric tours (e.g. 13546278 and its reverse 87264531): n!/2
- ignoring shifted identical tours as well (e.g. 13546278 and 54627813): (n−1)!/2
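Plugging in the n = 8 example (my arithmetic, for illustration):

\[
n! = 40320, \qquad \frac{n!}{2} = 20160, \qquad \frac{(n-1)!}{2} = \frac{5040}{2} = 2520,
\]

so even for eight cities an exhaustive search must consider 2520 genuinely distinct tours, and this count grows factorially with n.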

Evaluation function. Let S be the set of solutions, and let a solution be a tour s = (c_1, c_2, ..., c_n). Then
f(s) = d(c_1,c_2) + d(c_2,c_3) + ... + d(c_{n−1},c_n) + d(c_n,c_1).
The objective is to find s ∈ S such that f(s) ≤ f(s') for all s' ∈ S, i.e. to find a tour with the minimum length.

Neighbourhood solutions.
- A 2-opt move interchanges 2 non-adjacent edges. Current: 13546278; neighbours: 13246578, 13746258, 13586274, etc.
- A 2-right_insert move moves a city 2 positions to the right. Current: 13546278; neighbours: 15436278, 13542768, etc.

Greedy Algorithm 1:
1. Select a random starting city.
2. Proceed to the nearest unvisited city.
3. Repeat step 2 until all cities are visited.
4. Return to the starting city.

Greedy Algorithm 2:
1. Find the shortest edge (c_i, c_j) and add cities c_i, c_j to the tour.
2. Select the next cheapest edge (c_r, c_t) (making sure no city is visited more than once).
3. Add cities c_r, c_t to the tour.
4. Repeat steps 2-3 until a tour is formed.

(A Java sketch of Greedy Algorithm 1, together with a 2-opt move, is given at the end of this section.)

In combinatorial optimization problems, the neighbourhood of a solution x in the search space S can be defined as:
N(x) = { y ∈ S | y = φ(x) }, where φ is a move or a sequence of moves.
A feasible solution x is a local optimum with respect to N(x) if:
f(x) ≤ f(y) for all y ∈ N(x) (assuming minimisation).
If the inequality above is strict, then x is a strict local optimum.
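The following self-contained Java sketch (mine) implements Greedy Algorithm 1 — nearest-neighbour tour construction — plus a 2-opt move for generating neighbours; it reuses the illustrative distance matrix from Section 1.2.

import java.util.*;

// Greedy Algorithm 1: nearest-neighbour tour construction for the TSP,
// plus a 2-opt move. Distances reuse the earlier illustrative instance.
public class NearestNeighbour {
    static final int[][] DIST = {
        {0,  7,  4,  6},
        {7,  0, 12,  7},
        {4, 12,  0, 15},
        {6,  7, 15,  0}
    };

    static int[] nearestNeighbourTour(int start) {
        int n = DIST.length;
        boolean[] visited = new boolean[n];
        int[] tour = new int[n];
        tour[0] = start;
        visited[start] = true;
        for (int step = 1; step < n; step++) {
            int current = tour[step - 1], next = -1;
            for (int city = 0; city < n; city++) {        // step 2: nearest unvisited city
                if (!visited[city] && (next < 0 || DIST[current][city] < DIST[current][next])) {
                    next = city;
                }
            }
            tour[step] = next;
            visited[next] = true;
        }
        return tour; // step 4 (return to start) is implicit when the tour length is computed
    }

    // A 2-opt move: reverse the segment tour[i..j], exchanging two non-adjacent edges.
    static int[] twoOpt(int[] tour, int i, int j) {
        int[] neighbour = tour.clone();
        for (int a = i, b = j; a < b; a++, b--) {
            int tmp = neighbour[a]; neighbour[a] = neighbour[b]; neighbour[b] = tmp;
        }
        return neighbour;
    }

    public static void main(String[] args) {
        int start = new Random().nextInt(DIST.length);     // step 1: random starting city
        int[] tour = nearestNeighbourTour(start);
        System.out.println(Arrays.toString(tour));
        System.out.println(Arrays.toString(twoOpt(tour, 1, 2))); // one 2-opt neighbour
    }
}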

(The original document shows a figure here of a fitness landscape, illustrating local optima, global optima, hills, valleys and plateaus.)

Neighbourhood search refers to exploring solutions within the neighbourhood of the current solution, with the goal of finding the best solution in the vicinity, i.e. a local optimum. The trade-off in neighbourhood search is between the size of N(x) and the search time. Note that defining appropriate neighbourhoods, according to the problem domain and the search strategy, is crucial in heuristic local search.

Neighbourhood Size
The k-exchange neighbourhood is widely used in many combinatorial optimization problems. Typically, the size of a k-exchange neighbourhood is O(n^k); a worked count follows below. Neighbourhood pruning considers only a subset of the neighbours, selected based on the state of the current solution.

Some Strategies for Escaping Local Optima
- Re-initialise the search from a different starting solution.
- Accept non-improving candidate solutions to replace the current solution, in a deterministic or probabilistic manner.
- Explore only a fraction of the neighbourhood (e.g. pruning).
- Use more than one neighbourhood to generate candidate solutions.
- Design composite (independent or not) neighbourhoods.
- Maintain a memory of already visited solutions or regions.
- Modify the fitness function to change the search landscape.
- Perturb local optima in a deterministic or probabilistic manner.
- Improve more than one solution simultaneously.
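As the worked count promised above (standard arithmetic, not from the original text): in the TSP 2-exchange (2-opt) neighbourhood, a neighbour is obtained by choosing 2 of the n tour edges that are non-adjacent, giving

\[
|N_{2\text{-opt}}(x)| = \binom{n}{2} - n = \frac{n(n-3)}{2} = O(n^2),
\]

e.g. 20 neighbours for n = 8, in line with the general $O(n^k)$ bound for k-exchange neighbourhoods.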

2.7 Basics of Meta-heuristics

Greedy heuristics and simple local search have limitations with respect to the quality of the solutions that they can produce. A good balance between intensification and diversification is crucial in order to obtain high-quality solutions for difficult problems.

A meta-heuristic can be defined as an iterative master process that guides and modifies the operations of subordinate heuristics to efficiently produce high-quality solutions. It may manipulate a complete (or incomplete) single solution, or a collection of solutions, at each iteration. The subordinate heuristics may be high (or low) level procedures, a simple local search, or just a construction method (Voss et al. 1999). A meta-heuristic method is meant to provide a generalized approach that can be applied to different problems; meta-heuristics do not need a formal mathematical model of the problem to solve.

An example of a basic meta-heuristic is iterated local search, an intuitive method for combining intensification and diversification: local search is run until it reaches a local optimum, the local optimum is perturbed, and the local search is restarted from the perturbed solution. The local search and perturbation operators should be carefully designed so that they work well in combination. (The original document shows a flow diagram of this process here; a code sketch follows below.)
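A minimal iterated local search skeleton, as a sketch under my own design choices (the problem-specific pieces are passed in as functions, and the toy demonstration minimises a convex function; nothing here is prescribed by the original text):

import java.util.*;
import java.util.function.*;

// Iterated local search skeleton: descend to a local optimum, perturb it,
// descend again, and keep the better of the two solutions.
public class IteratedLocalSearch {
    static <S> S ils(S initial,
                     UnaryOperator<S> localSearch,
                     UnaryOperator<S> perturb,
                     ToDoubleFunction<S> f,
                     int iterations) {
        S best = localSearch.apply(initial);
        for (int it = 0; it < iterations; it++) {
            S candidate = localSearch.apply(perturb.apply(best)); // diversify, then intensify
            if (f.applyAsDouble(candidate) < f.applyAsDouble(best)) {
                best = candidate;                                 // acceptance: keep improvements
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Toy demonstration: minimise f(x) = (x - 42)^2 over the integers.
        Random rng = new Random();
        ToDoubleFunction<Integer> f = x -> (x - 42.0) * (x - 42.0);
        UnaryOperator<Integer> localSearch = x -> {               // greedy descent by +/-1 steps
            while (f.applyAsDouble(x + 1) < f.applyAsDouble(x)) x++;
            while (f.applyAsDouble(x - 1) < f.applyAsDouble(x)) x--;
            return x;
        };
        UnaryOperator<Integer> perturb = x -> x + rng.nextInt(21) - 10; // random jump
        System.out.println(ils(1000, localSearch, perturb, f, 50));     // expect 42
    }
}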

Some Types of Meta-heuristics
- Single-solution vs. population-based methods
- Nature-inspired vs. non-nature-inspired methods
- Pure vs. hybrid methods
- Memory-based vs. memory-less methods
- Deterministic vs. stochastic methods
- Iterative vs. greedy methods

The following algorithms are examples of meta-heuristics:
- Iterated local search
- Threshold acceptance
- Great deluge
- Simulated annealing
- Greedy randomised adaptive search procedure (GRASP)
- Guided local search
- Variable neighbourhood search
- Tabu search
- Evolutionary algorithms
- Particle swarm optimization
- Artificial immune systems
- Etc.

2.8 Basics of Evolutionary Algorithms

The rationale of evolutionary algorithms (EAs) is to maintain a population of solutions during the search. The solutions (individuals) compete among themselves and are subject to selection and reproduction operators over a number of generations. The key ideas, balancing exploitation and exploration, are:
- Maintain many solutions instead of only one.
- Apply the survival-of-the-fittest principle.
- Pass on good solution components through recombination.
- Explore and discover new components through self-adaptation.

Solutions are modified from generation to generation by means of reproduction operators (recombination and self-adaptation).

Typical Evolutionary Algorithm Cycle
(The original document shows a figure of the typical evolutionary algorithm cycle here.)

Typical Components of an EA
- Solution representation
- Population size
- Initialisation strategy
- Evaluation function
- Reproduction operators
- Selection mechanism
- Stopping condition to detect convergence

A diverse range of evolutionary procedures can be designed for a given problem by tuning the above components. It is crucial to design components and tune parameters while considering the choices made in the other parts of the algorithm.
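To make these components concrete, here is a compact generational EA sketch of my own devising (bit-string representation, tournament selection, one-point crossover, bit-flip mutation, and the OneMax toy fitness); the original text prescribes no particular design:

import java.util.*;

// Minimal generational EA sketch. Fitness: count of ones in a bit string
// (the OneMax toy problem) -- all design choices here are illustrative.
public class SimpleEA {
    static final int POP = 30, LEN = 20, GENERATIONS = 100;
    static final Random RNG = new Random();

    static int fitness(boolean[] s) {                      // evaluation function
        int ones = 0;
        for (boolean bit : s) if (bit) ones++;
        return ones;
    }

    static boolean[] tournament(boolean[][] pop) {         // selection mechanism
        boolean[] a = pop[RNG.nextInt(POP)], b = pop[RNG.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][LEN];           // initialisation strategy
        for (boolean[] s : pop)
            for (int i = 0; i < LEN; i++) s[i] = RNG.nextBoolean();

        for (int gen = 0; gen < GENERATIONS; gen++) {      // stopping condition
            boolean[][] next = new boolean[POP][];
            for (int k = 0; k < POP; k++) {
                boolean[] p1 = tournament(pop), p2 = tournament(pop);
                boolean[] child = new boolean[LEN];
                int cut = RNG.nextInt(LEN);                // one-point crossover
                for (int i = 0; i < LEN; i++) child[i] = (i < cut) ? p1[i] : p2[i];
                int flip = RNG.nextInt(LEN);               // bit-flip mutation
                child[flip] = !child[flip];
                next[k] = child;
            }
            pop = next;                                    // generational replacement
        }
        int best = 0;
        for (boolean[] s : pop) best = Math.max(best, fitness(s));
        System.out.println("Best fitness: " + best + " / " + LEN);
    }
}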

Some Examples of Evolutionary Algorithms
- Genetic Algorithms (GA)
- Evolutionary Strategies (ES)
- Genetic Programming (GP)
- Ant Colony Optimization (ACO)
- Memetic Algorithms (MA)
- Particle Swarm Optimization (PSO)
- Differential Evolution (DE)
- Estimation of Distribution Algorithms (EDA)
- Cultural Algorithms (CA)

Since this is a very active research area, new variants of EAs and other population-based meta-heuristics are often proposed in the specialised literature.

In optimization problems the goal is to find the best feasible solution. Issues when dealing with infeasible solutions include:
- Evaluating the quality of feasible and infeasible solutions
- Discriminating between feasible and infeasible solutions
- Deciding if infeasibility should be eliminated
- Deciding if infeasibility should be repaired
- Deciding if infeasibility should be penalised
- Initiating the search with feasible solutions, infeasible ones, or both
- Maintaining feasibility with specialised representations, moves and operators
- Designing a decoder to translate an encoding into a feasible solution
- Focusing the search on specific areas, e.g. on the boundaries of the feasible region

A constraint handling technique is required to support the effective and efficient exploration of the search space.
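One common way to realise the "penalise" option above — a standard scheme, not one specified in the original document — is to fold constraint violations into the evaluation function:

\[
f'(x) = f(x) + \lambda \sum_{i=1}^{m} \max\bigl(0,\; g_i(x) - b_i\bigr),
\]

where $\lambda > 0$ weights the total violation of the constraints $g_i(x) \le b_i$ from Section 2.1. Infeasible solutions can then still be explored by the search, but they are increasingly disfavoured as $\lambda$ grows.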