Algorithms & Complexity


Algorithms & Complexity
Nicolas Stroppa - nstroppa@computing.dcu.ie
CA313 @ Dublin City University, 2006-2007
November 21, 2006

Classification of Algorithms

- O(1): run time is independent of the size of the problem n. (Constant)
- O(ln(n)): occurs when a big problem is solved by repeatedly reducing it in size by some constant fraction. (Logarithmic)
- O(n): occurs when each element of the problem requires a small amount of processing. (Linear)
- O(n ln(n)): occurs when a problem is broken into smaller subproblems that are solved independently and whose solutions are then combined. (Linearithmic)
- O(n^2): occurs when an algorithm processes all pairs of elements. (Quadratic)
- O(2^n): exponential run time, characteristic of brute-force approaches. To be avoided. (Exponential)

Classification of Algorithms

If each step takes 1 ms (10^-3 s), then:

                n = 100           n = 200
O(ln(n))        2 ms              2.3 ms
O(sqrt(n))      10 ms             14.1 ms
O(n)            0.1 s             0.2 s
O(n ln(n))      0.2 s             0.46 s
O(n^2)          10 s              40 s
O(2^n)          4 x 10^19 years   5.1 x 10^49 years

Exponential problems are intractable!
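The figures in this table can be reproduced with a few lines of Python (a sketch, not from the lecture; logarithms are taken base 10, which is what matches the numbers above):

import math

STEP = 1e-3  # one step takes 1 ms = 10^-3 s

# Growth functions matching the rows of the table above.
growth = {
    "O(log n)":   lambda n: math.log10(n),
    "O(sqrt n)":  lambda n: math.sqrt(n),
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log10(n),
    "O(n^2)":     lambda n: n ** 2,
    "O(2^n)":     lambda n: 2 ** n,
}

for name, f in growth.items():
    # Times are printed in seconds; divide by ~3.15e7 to get years.
    row = "".join(f"{f(n) * STEP:>12.3g} s" for n in (100, 200))
    print(f"{name:12s}{row}")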

Some NP-complete problems

Games:
- Chess
- Sudoku
- Minesweeper

Scheduling:
- Find the best matching between rooms, teachers, and students.
- Find the best way to schedule processes in a computer.

Positioning:
- Positioning antennas
- Designing the lines of a metro

Complex problems need approximate solutions

Definition (Heuristic). A heuristic is an approach for directing one's attention in learning, discovery, or problem-solving. A heuristic is usually based on a general principle that prefers some solutions to others. A heuristic usually leads to an approximate (but often good) solution, in tractable time.

Note. The word comes from the Greek heurisko ("I find"); compare eureka ("I have found it!").

Example - Driving in Paris

You are visiting your friend in Paris for a week. You rent a car at the airport and you have to find the shortest path to his place. This is an optimization problem: among the set of all possible paths, you want to find the shortest one. Depending on the configuration of the streets, this problem can be more or less difficult: in Manhattan it is very easy, since all the paths have the same length, but in Paris it is more complex. A heuristic in this case is to stick to the main avenues: this is not an optimal method, but it will always give you a decent solution.

Example - Traveling in Europe

You want to visit the capital cities of Europe, and you want to minimize the total cost of the trip. This problem is actually the same as the Traveling Salesman Problem, with distances replaced by costs (flights, trains). A simple heuristic: from the current city A, always go to the least expensive city reachable from A.
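A minimal sketch of this greedy ("nearest neighbour") heuristic, assuming a user-supplied cost(a, b) function; all names are illustrative, not from the lecture:

def greedy_tour(cities, cost, start):
    """Nearest-neighbour heuristic: from the current city, always move to
    the cheapest unvisited city. Fast and decent, but not optimal."""
    tour = [start]
    remaining = set(cities) - {start}
    while remaining:
        here = tour[-1]
        nxt = min(remaining, key=lambda c: cost(here, c))  # cheapest next hop
        tour.append(nxt)
        remaining.remove(nxt)
    return tour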

Example - Chess

In chess, you often trade pieces with the other player. A heuristic consists in assigning a value to each piece and, during a trade, trying not to give up a piece of greater value than the one the other player is losing.
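As a tiny illustration (not from the lecture), the heuristic amounts to a value table plus a comparison; the piece values below are the conventional ones:

# Conventional piece values; names are illustrative.
PIECE_VALUE = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def trade_is_acceptable(lost, captured):
    """Accept an exchange only if we do not give up more value than we gain."""
    return PIECE_VALUE[lost] <= PIECE_VALUE[captured]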

Some well-known heuristics

Genetic Algorithms, Simulated Annealing, Tabu Search.

Note. These methods are sometimes called meta-heuristics because they are very general and (in principle) can be applied to any problem.

Genetic algorithms

Genetic algorithms are biologically inspired algorithms and form a particular class of evolutionary algorithms. They use techniques inspired by evolutionary biology such as inheritance, mutation, selection, and crossover (also called recombination). Intuitively, they rely on the principles of natural selection to solve complex optimisation problems.

Genetic algorithms - Main ideas

Each element of the search space is associated with an individual; each individual represents a candidate solution to our problem. The fitness of an individual is the quality of the corresponding solution (for example, in the TSP, it would be the total cost of a path). The evolution starts from a population of randomly generated individuals and proceeds in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals (usually the best ones) are selected from the current population and modified (recombined and possibly mutated) to form a new population, which is then used in the next iteration of the algorithm.

Genetic algorithms - Main ideas

Selection: given a population P = {i_1, ..., i_n}, choose the k best individuals SELECT(P, k) of P, according to the fitness function.

Mutation: given an individual i, mutate i to get an individual m(i) that differs slightly from i.

Cross-over/Recombination: given two individuals i and j, combine them to get a new individual c(i, j).

Genetic algorithms - Formalization

Each element e of the search space S is associated with an individual, a genetic representation of e. The fitness function is noted f.

1. Init. Generate a random initial population P = {i_1, ..., i_n} of n individuals. i <- 0, P_i <- P.
2. Selection. P_{i+1} <- SELECT(P_i, k).
3. If a stopping criterion is reached, go to (6); otherwise, go to step (4).
4. Mutation and Cross-over. From P_{i+1}, create new individuals P_new through crossover and mutation. P_{i+1} <- P_{i+1} ∪ P_new.
5. i <- i + 1. Go to step (2).
6. Propose the best individual in P_{i+1}.
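Below is a compact Python sketch of steps (1)-(6), assuming the problem-specific operators are supplied by the caller; all names and parameter defaults are illustrative, not part of the lecture:

import random

def genetic_algorithm(random_individual, fitness, mutate, crossover,
                      n=100, k=20, generations=500, mu=0.1):
    """Generic GA skeleton following steps (1)-(6) above."""
    population = [random_individual() for _ in range(n)]   # (1) init
    for _ in range(generations):                           # (3) stopping criterion
        # (2) selection: keep the k fittest individuals
        population = sorted(population, key=fitness, reverse=True)[:k]
        # (4) cross-over and mutation: create P_new
        new_individuals = []
        while len(new_individuals) < n - k:
            a, b = random.sample(population, 2)
            child = crossover(a, b)
            if random.random() < mu:
                child = mutate(child)
            new_individuals.append(child)
        population += new_individuals                      # P_{i+1} union P_new
    return max(population, key=fitness)                    # (6) best individual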

Genetic algorithms - Properties

The goal of the mutation step is to explore new areas and thus avoid getting stuck in local optima. The goal of the recombination step is to combine the strengths of two good individuals in order to create an individual that is (possibly) better than its parents. You can stop when you consider that your population is too homogeneous, when you have reached a given number of steps, or when you have reached a time limit. At each step the best individual so far can be proposed, so it is an anytime algorithm. It is a stochastic (i.e. non-deterministic) algorithm. The main parameters of the algorithm are: n (number of individuals in a population), k (number of selected individuals), and µ (probability of mutation).

Genetic algorithms and the TSP

Representation: a path is a list of cities, e.g. {a, b, d, f, e, c}.
Selection: the fitness function is the total cost of a path.
Mutation: swap two cities in the path: {a, b, d, f, e, c} -> {a, d, b, f, e, c}.
Re-combination: cut both parents at a random split position and recombine: {a, b | d, f, e, c} + {d, f | c, b, e, a} -> {a, b | c, e, d, f}.
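As an illustration, here is a hedged Python sketch of the two operators above, with paths represented as lists. The crossover shown is one common way of making a random-split recombination produce valid tours (each city exactly once); it may differ in detail from the operator on the slide:

import random

def swap_mutation(path):
    """Mutation: swap two randomly chosen cities."""
    i, j = random.sample(range(len(path)), 2)
    path = list(path)
    path[i], path[j] = path[j], path[i]
    return path

def split_crossover(p1, p2):
    """Recombination: keep a random prefix of p1, then append the cities
    missing from it in the order in which they appear in p2, so that the
    child visits each city exactly once."""
    cut = random.randrange(1, len(p1))
    prefix = p1[:cut]
    return prefix + [c for c in p2 if c not in prefix]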

Hill Climbing

Hill climbing is an optimisation method in which you explore the search space by going from one solution to one of its neighbours. Intuitively, you try to climb the hill! In the basic version of hill climbing, you start from a random solution, you examine its neighbours, and you choose the first one that is better than it according to the fitness function. You stop when: (i) you have reached a local optimum (i.e. the current solution is better than all its neighbours), or (ii) you have reached a given number of steps.

Hill Climbing Pseudo Code

solution := randomSolution()
MAX_FITNESS := FITNESS(solution)
found := True
WHILE found:
    found := False
    neighbours := Neighbours(solution)
    FOR s IN neighbours:
        IF FITNESS(s) > MAX_FITNESS:
            MAX_FITNESS := FITNESS(s)
            solution := s
            found := True
            BREAK
RETURN solution
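For reference, a runnable Python version of the same loop, with randomSolution, FITNESS and Neighbours passed in as functions (the names are illustrative):

def hill_climbing(random_solution, fitness, neighbours):
    """Runnable version of the pseudocode above (maximizes fitness)."""
    solution = random_solution()
    max_fitness = fitness(solution)
    found = True
    while found:
        found = False
        for s in neighbours(solution):
            if fitness(s) > max_fitness:   # first improving neighbour
                max_fitness = fitness(s)
                solution = s
                found = True
                break                      # restart from the new solution
    return solution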

Hill Climbing

Hill climbing is a very simple optimization algorithm!

Complexity: O(t k), where t is the total number of steps and k the number of neighbours per path.

Hill Climbing - Application to the TSP

Representation: two paths are neighbours if you can turn one into the other by swapping two cities.

Complexity: O(t k), where t is the total number of steps and k the number of neighbours per path. If n is the number of cities, what is k? If only adjacent cities may be swapped, k = n and the complexity is O(t n); if any pair of cities may be swapped, k = n(n-1)/2 and the complexity is O(t n^2).

Hill Climbing - Application to the TSP

Advantages:
- Very generic (can be applied to any optimisation problem)
- Very simple
- Is an anytime algorithm

Disadvantages:
- Strongly dependent on the initial random point
- Gets stuck in local optima (it is a greedy algorithm)

Simulated annealing - Avoiding local optima

Simulated annealing (SA) is a generic, probabilistic meta-heuristic introduced in the 1980s. Genetic algorithms are biologically inspired; simulated annealing is inspired by metallurgy. Thermal annealing is a technique used in metallurgy to reduce the defects of a material by heating followed by controlled cooling: the heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and to wander randomly through states of higher energy, while the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one.

Simulated annealing

By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random nearby solution, chosen with a probability that depends both on the difference between the corresponding fitness values and on a global parameter T (called the temperature), which is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but moves increasingly downhill as T goes to zero. The allowance for uphill moves saves the method from becoming stuck at local minima, which are the bane of greedier methods.

Simulated annealing

The algorithm starts by generating an initial solution (usually a random one) and by initializing the so-called temperature parameter T. Then the following is repeated until the termination condition is satisfied: a solution s is randomly sampled from the neighbourhood Neighbours(solution) of the current solution; it is accepted as the new current solution if FITNESS(solution) < FITNESS(s) or, in case FITNESS(solution) >= FITNESS(s), with a probability that is a function of T and of FITNESS(solution) - FITNESS(s), usually exp(-(FITNESS(solution) - FITNESS(s)) / T).

Simulated annealing Pseudo Code

solution := randomSolution()
T := T_0
WHILE termination conditions not met:
    s := chooseRandomlyFrom(Neighbours(solution))
    IF FITNESS(solution) < FITNESS(s):
        solution := s
    ELSE:
        WITH probability exp(-(FITNESS(solution) - FITNESS(s)) / T):
            solution := s
    Update(T)
ENDWHILE
RETURN solution
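For completeness, a runnable Python version of this pseudocode, using a simple geometric cooling schedule for Update(T); the schedule and all parameter values are illustrative choices, not from the lecture:

import math
import random

def simulated_annealing(random_solution, fitness, neighbours,
                        t0=1.0, alpha=0.95, steps=10000):
    solution = random_solution()
    t = t0
    for _ in range(steps):                       # termination: fixed step budget
        s = random.choice(neighbours(solution))
        delta = fitness(solution) - fitness(s)   # > 0 means s is worse
        if delta < 0 or random.random() < math.exp(-delta / t):
            solution = s                         # always accept if s is better
        t *= alpha                               # Update(T): geometric cooling
    return solution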