Optimization Techniques for Design Space Exploration


Optimization Techniques for Design Space Exploration
Zebo Peng
Embedded Systems Laboratory (ESLAB), Linköping University

Outline
Optimization problems in ERT system design
Heuristic techniques
Simulated annealing
Tabu search

Design Space of ERT Systems
Very large due to many solution parameters: architectures and components, hardware/software partitioning, mapping and scheduling, operating systems and global control, and communication synthesis. The components span hardware and software: microprocessors, embedded memory, ASICs, analog circuits, sensors, high-speed electronics, DSPs, and networks.
Very bumpy due to the interdependence between the different parameters and the many design constraints (time, power, partial structure, ...).
Many embedded RT systems have a very complex design space.
Prof. Z. Peng, ESLAB/LiTH, Sweden

Design Space Exploration
What is needed in order to explore the complex design space and find a good solution:
Exploration at higher levels of abstraction.
Development of high-level analysis and estimation techniques.
Employment of very fast exploration algorithms.
Memory-less algorithms: each solution needs a huge data structure to store, so we can't afford to keep track of all visited solutions.
Design space exploration should be formulated as optimization problems, and powerful optimization techniques are needed!

The Optimization Problem
The majority of design space exploration tasks can be viewed as optimization problems: to find
- the architecture (type and number of processors, memory modules, and communication mechanisms, as well as their interconnections),
- the mapping of functionality onto the architecture components, and
- the schedules of basic functions and communications,
such that a cost function (in terms of implementation cost, performance, power, etc.) is minimized and a set of constraints is satisfied.

Mathematical Optimization
The design optimization problems can be formulated as:
Minimize f(x)
Subject to g_i(x) <= b_i, i = 1, 2, ..., m,
where x is a vector of decision variables (x >= 0); f is the cost (objective) function; the g_i's are a set of constraints.
If f and the g_i's are linear functions, we have a linear programming (LP) problem. LP can be solved by the simplex algorithm, which is an exact method: it will always identify the optimal solution if it exists.

Combinatorial Optimization (CO)
In many design problems, the decision variables are restricted to integer values. The solution is a set, or a sequence, of integers or other discrete objects. We then have an Integer Linear Programming (ILP) problem, which turns out to be more difficult to solve than LP problems.
Ex. System partitioning can be formulated as: given a graph with costs on its edges, partition the nodes into k subsets no larger than a given maximum size, so as to minimize the total cost of the cut edges. A feasible solution (with n nodes) is represented as x_i = j, where j ∈ {1, 2, ..., k} and i = 1, 2, ..., n.
Such problems are called combinatorial optimization problems.
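The solution encoding x_i = j can be made concrete with a small sketch. The 4-node graph and its edge costs below are invented for illustration; the function simply evaluates the objective (total cost of cut edges) for one feasible assignment:

```python
# Sketch: evaluating one feasible solution of the k-way partitioning
# problem above. A solution assigns each node i a subset label
# x[i] in {1, ..., k}; its cost is the total cost of the cut edges.
# The example graph and edge costs are invented for illustration.

def cut_cost(assignment, edges):
    """Sum the costs of edges whose endpoints lie in different subsets."""
    return sum(c for (u, v, c) in edges if assignment[u] != assignment[v])

# 4 nodes, weighted edges (u, v, cost)
edges = [(0, 1, 3), (1, 2, 1), (2, 3, 4), (0, 3, 2)]

# x_i = j: nodes 0 and 1 in subset 1; nodes 2 and 3 in subset 2
assignment = {0: 1, 1: 1, 2: 2, 3: 2}
print(cut_cost(assignment, edges))  # edges (1,2) and (0,3) are cut -> 3
```

A heuristic would search over such assignments, re-evaluating (or incrementally updating) this cost after each move.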

Features of CO Problems
Most CO problems arising in ERT system design, e.g., system partitioning with constraints, are NP-complete. The time needed to solve an NP-complete problem grows exponentially with respect to the problem size n.
Approaches for solving such problems are usually based on implicit enumeration of the feasible solutions. For example, to enumerate all feasible solutions of a scheduling problem (all possible permutations), assuming 10 tasks can be enumerated in 1 hour, we get: 11 tasks in 11 hours; 12 tasks in about 5.5 days; ... 16 tasks in centuries.

Outline
Optimization problems in ERT system design
Heuristic techniques
Simulated annealing
Tabu search
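The blow-up in the enumeration example is plain factorial arithmetic. A sketch (the 1-hour/10-task baseline is the slide's assumption):

```python
# Sketch of the exponential blow-up: if enumerating all n! task
# permutations takes 1 hour for 10 tasks, scale the time for larger n.
from math import factorial

def hours_to_enumerate(n, base_n=10, base_hours=1.0):
    return base_hours * factorial(n) / factorial(base_n)

print(hours_to_enumerate(11))               # 11.0 hours
print(hours_to_enumerate(12) / 24)          # 5.5 days
print(hours_to_enumerate(16) / (24 * 365))  # ~658 years: centuries
```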

Heuristics
A heuristic seeks near-optimal solutions at a reasonable computational cost, without being able to guarantee either feasibility or optimality.
Motivations:
Many exact algorithms require a huge computational effort.
The decision variables have complicated interdependencies.
We often have nonlinear cost functions and constraints, or even no mathematical functions at all. Ex. the cost function f can be defined by a computer program (e.g., for power estimation).
Approximation of the model for optimization.
A near-optimal solution is usually good enough, and could even be better than the theoretical optimum (of the approximated model).

Heuristic Approaches to CO
Constructive:
Problem specific: list scheduling, left-edge algorithm, clustering.
Generic methods: divide and conquer, branch and bound.
Transformational:
Iterative improvement: Kernighan-Lin algorithm for graph partitioning.
Neighborhood search: simulated annealing, tabu search, genetic algorithms.

Clustering for System Partitioning
Each node initially belongs to its own cluster, and clusters are gradually merged until the desired partitioning is found. Merge operations are selected based on local information (closeness metrics), rather than a global view of the system.
[Figure: a graph of nodes being merged step by step into larger clusters.]

Branch-and-Bound
Traverse an implicit tree to find the best leaf (solution).
[Figure: a small traveling-salesman (TSP) instance with city-to-city costs; the total cost of the example solution is 88.]

Branch-and-Bound Example
A lower bound on the cost function prunes the search, following a chosen search strategy.
[Figure: the branch-and-bound tree for the TSP instance, expanding partial tours {0}, {0,i}, {0,i,j}, ... with a lower bound L at each node; branches whose bound exceeds the best complete tour found so far are cut off.]

Neighborhood Search Method
Step 1 (Initialization)
(A) Select a starting solution x_now ∈ X.
(B) x_best = x_now, best_cost = c(x_best).
Step 2 (Choice and termination)
Choose a solution x_next ∈ N(x_now), the neighborhood of x_now. If no solution can be selected, or the terminating criteria apply, then the algorithm terminates.
Step 3 (Update)
Re-set x_now = x_next. If c(x_now) < best_cost, perform Step 1(B). Go to Step 2.

Neighborhood Search Method
Very attractive for many CO problems, as they have a natural neighborhood structure which can be easily defined and evaluated. Ex. graph partitioning: swapping two nodes between the partitions.
[Figure: two partitionings of the same graph, before and after a swap, with different cut costs.]

The Descent Method
Step 1 (Initialization)
Step 2 (Choice and termination)
Choose x_next ∈ N(x_now) such that c(x_next) < c(x_now), and terminate if no such x_next can be found.
Step 3 (Update)
The descent method can easily get stuck at a local optimum.
[Figure: cost plotted over the solution space, with the search trapped in a local valley.]
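The descent method above can be sketched on a toy search space: solutions are indices into a cost array, and the neighborhood of i is {i-1, i+1}. The cost landscape is invented to show how descent stops at the first local optimum it reaches:

```python
# Sketch of the descent method on a toy 1-D search space.

def descent(cost, x_now):
    while True:
        neighbors = [x for x in (x_now - 1, x_now + 1) if 0 <= x < len(cost)]
        better = [x for x in neighbors if cost[x] < cost[x_now]]
        if not better:
            return x_now  # local optimum: no improving neighbor
        x_now = min(better, key=lambda x: cost[x])

cost = [5, 3, 4, 6, 2, 7]   # global optimum at index 4 (cost 2)
print(descent(cost, 0))      # 1 -> stuck at a local optimum (cost 3)
print(descent(cost, 5))      # 4 -> finds the global optimum
```

Starting from index 0, descent halts at index 1 because both neighbors are worse; only a different starting point (or an uphill move, as discussed next) reaches the global optimum.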

Dealing with Local Optimality
Enlarge the neighborhood.
Start with different initial solutions.
Allow "uphill moves": simulated annealing, tabu search.
[Figure: cost plotted over the solution space; an uphill move out of a local valley makes further improvement possible.]

Outline
Optimization problems in ERT system design
Heuristic techniques
Simulated annealing
Tabu search

Annealing
Annealing is the slow cooling of metallic material after heating:
A solid material is heated past its melting point.
Cooling is then done slowly.
When cooling stops, the material usually settles into a low-energy state (e.g., a stable structure). In this way, the properties of the material are improved.

Annealing
The annealing process can be viewed intuitively as follows:
At high temperature, the atoms are randomly oriented due to their high energy caused by heat.
When the temperature is reduced, the atoms tend to line up with their neighbors, but different regions may have different directions.
If cooling is done slowly, the final frozen state will have a near-minimal energy state.

Simulated Annealing
Annealing can be imitated by computer simulation:
Generate a random perturbation of the atom orientations and calculate the resulting energy change.
If the energy has decreased, the system moves to this new state.
If the energy has increased, the new state is accepted according to the laws of thermodynamics: at temperature t, the probability of an increase in energy of magnitude δE is given by
p(δE) = e^(−δE / (k·t)),
where k is called Boltzmann's constant.

Probability of Accepting Higher-Energy States
[Figure: p(δE) = e^(−δE / (k·t)) plotted against δE, decreasing from 1 towards 0 as δE grows.]
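The acceptance rule above is one line of code. A sketch (in combinatorial optimization, k is usually folded into the temperature, i.e., k = 1, as in the SA algorithm later in these slides):

```python
# Sketch of the thermodynamic acceptance rule: the probability of
# accepting an energy increase dE at temperature t is exp(-dE / (k*t)).
from math import exp

def accept_probability(dE, t, k=1.0):
    if dE <= 0:
        return 1.0  # improving (or neutral) moves are always accepted
    return exp(-dE / (k * t))

print(accept_probability(1.0, 10.0))  # high temperature: ~0.905
print(accept_probability(1.0, 0.1))   # low temperature: ~4.5e-5
```

The same uphill move is thus very likely to be accepted early in the search and almost never accepted once the system has cooled.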

Simulated Annealing for CO
The SA algorithm can be applied to combinatorial optimization via the following correspondence:
Thermodynamic simulation -> Combinatorial optimization
System states -> Feasible solutions
Energy -> Cost
Change of state -> Moving to a neighboring solution
Temperature -> Control parameter
Frozen state -> Final solution

The SA Algorithm
Select an initial solution x_now ∈ X;
Select an initial temperature t > 0;
Select a temperature reduction function α;
Repeat
  Repeat
    Randomly select x_next ∈ N(x_now);
    δ = cost(x_next) − cost(x_now);
    If δ < 0 then x_now = x_next
    else
      generate random p in the range (0, 1) uniformly;
      If p < exp(−δ/t) then x_now = x_next;
  Until iteration_count = nrep;
  Set t = α(t);
Until stopping condition = true.
Return x_now as the approximation to the optimal solution.
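The pseudo-code above translates directly into a runnable sketch. The toy cost function (a bumpy parabola over the integers 0..99), the ±1 neighborhood, and all parameter values are invented for illustration:

```python
# Sketch of the SA algorithm above on a toy minimization problem.
import random
from math import exp

def simulated_annealing(cost, neighbors, x0, t=10.0, alpha=0.9,
                        nrep=50, t_min=1e-3, seed=42):
    rng = random.Random(seed)
    x_now = x0
    x_best, best_cost = x_now, cost(x_now)
    while t > t_min:                      # stopping condition
        for _ in range(nrep):             # nrep iterations per temperature
            x_next = rng.choice(neighbors(x_now))
            delta = cost(x_next) - cost(x_now)
            if delta < 0 or rng.random() < exp(-delta / t):
                x_now = x_next            # accept: downhill, or uphill by chance
            if cost(x_now) < best_cost:
                x_best, best_cost = x_now, cost(x_now)
        t *= alpha                        # geometric temperature reduction
    return x_best, best_cost

bumpy = lambda x: (x - 70) ** 2 // 10 + (7 if x % 13 == 0 else 0)
nbrs = lambda x: [max(0, x - 1), min(99, x + 1)]
print(simulated_annealing(bumpy, nbrs, x0=0))
```

Note that the algorithm also tracks the best solution seen so far, since the final x_now after cooling need not be the best one visited.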

The SA Algorithm
Simulated annealing is really an optimization strategy rather than an algorithm: a meta-heuristic. To obtain a working algorithm, the following must be done:
Selection of generic parameters.
Problem-specific decisions must also be made.

A HW/SW Partitioning Example
[Figure: cost-function value vs. number of iterations for a hardware/software partitioning run of the SA algorithm.]

The Cooling Schedule I
Initial temperature (IT):
IT must be "hot" enough to allow an almost free exchange of neighborhood solutions, if the final solution is to be independent of the starting one.
Simulating the heating process: a system can first be heated rapidly until the proportion of accepted to rejected moves reaches a given value, e.g., until 80% of the moves leading to higher costs are accepted. Cooling then starts.

The Cooling Schedule II
Temperature reduction scheme:
Either a large number of iterations at few temperatures, or a small number of iterations at many temperatures.
Typically α(t) = a·t, where a < 1; a should be large, usually between 0.8 and 0.99.
For better results, the reduction rate should be slower in the middle temperature ranges.
Stopping conditions:
Zero temperature: the theoretical requirement.
A number of iterations or temperatures has passed without any acceptance of moves.
A given total number of iterations has been completed (or a fixed amount of execution time).
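The effect of the reduction rate a on running time is easy to quantify: with α(t) = a·t, the number of temperature levels needed to cool from t0 to t_min grows as log(t_min/t0)/log(a). A sketch (the t0 and t_min values are illustrative):

```python
# Sketch of the geometric reduction scheme alpha(t) = a * t: count how
# many temperature levels it takes to cool from t0 down to t_min.
from math import ceil, log

def cooling_steps(a, t0=100.0, t_min=0.01):
    return ceil(log(t_min / t0) / log(a))

print(cooling_steps(0.8))   # fast cooling: 42 levels
print(cooling_steps(0.99))  # slow cooling: 917 levels
```

This is why a is chosen close to 1 only when the extra (roughly 20x here) running time is affordable.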

Problem-Specific Decisions
The neighborhood structure should be defined such that:
All solutions are reachable from each other.
It is easy to generate a random neighboring feasible solution.
Infeasible solutions are penalized, if the solution space is strongly constrained.
The cost difference between a solution and its neighbor can be efficiently calculated.
The size of the neighborhood is kept reasonably small.
Many decision parameters must be fine-tuned based on experimentation with typical data.

Outline
Optimization problems in ERT system design
Heuristic techniques
Simulated annealing
Tabu search

Introduction to Tabu Search
Tabu search (TS) is a neighborhood search method which employs "intelligent" search and flexible memory to avoid being trapped at local optima:
It de-emphasizes randomization: moves are selected intelligently in each iteration (the best admissible move is selected).
It uses tabus to restrict the search space and avoid cyclic behavior (dead loops). The classification of tabus is based on the history of the search.
By taking advantage of history, it emulates the human problem-solving process.

An Illustrative Example
A set of tests is to be scheduled to check the correctness and other features of a given system. We want to find an ordering of the tests that maximizes the test performance (fault coverage, time, and power):
A feasible solution can be simply represented by a permutation of the given set of tests.
A neighborhood move can be defined by swapping two tests.
The best move will be selected in each step.
To avoid repeating or reversing swaps done recently, we classify the most recent swaps as tabu.

TS Example
[Figure: a permutation of 7 tests; swapping any pair of tests gives 21 possible moves in each iteration in this example.]

TS Example
Let a paired-test tabu be valid for only three iterations (the tabu tenure).
[Figure: the tabu structure records, for each pair of tests, the number of remaining iterations during which swapping them is tabu.]
When a tabu move would result in a solution better than any visited so far, its tabu classification may be overridden (an aspiration criterion).
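The swap neighborhood used in this example is easy to enumerate: n tests give n(n-1)/2 pairwise swaps. A sketch (the 7-test permutation is illustrative):

```python
# Sketch: all neighbors of a permutation under the pairwise-swap move
# used in the test-ordering example.
from itertools import combinations

def swap_moves(perm):
    """Return every permutation reachable by swapping one pair of tests."""
    out = []
    for i, j in combinations(range(len(perm)), 2):
        nxt = list(perm)
        nxt[i], nxt[j] = nxt[j], nxt[i]
        out.append(tuple(nxt))
    return out

print(len(swap_moves((1, 2, 3, 4, 5, 6, 7))))  # 21 moves for 7 tests
```

Because the neighborhood is this small, TS can afford to evaluate every move in each iteration and pick the best admissible one, rather than sampling randomly as SA does.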

A TS Process
[Figure: two iterations of the TS process on the test-ordering example. Each iteration shows the current solution (a permutation of the 7 tests) with its performance value, the tabu structure, and the top candidate swaps with their move values; the best admissible swap is applied and the swapped pair becomes tabu for three iterations.]

A TS Process
[Figure: two further iterations. In the first, all improving moves are tabu, so the best admissible move worsens the solution: uphill moves are allowed! In the second, a tabu swap leads to a solution better than any visited so far, so the aspiration criterion applies and the move is accepted: best so far!]

Tabu Memory
The paired-test tabu makes use of recency-based memory (short-term memory). It should be complemented by frequency-based memory (long-term memory) to diversify the search into new regions.
Diversification is restricted to operate only on particular occasions, for example, occasions where no admissible improving moves exist.

Tabu Memory Structure
[Figure: the current solution together with the recency-based tabu structure and a frequency-based count of how often each swap has been made; candidate moves are ranked by a penalized value, P.V. = Value − Frequency_count.]

Effects of Random Diversifications
[Figure: cost-function value vs. number of iterations for two TS runs, illustrating the effect of random diversification on the optimum reached.]

The Basic TS Algorithm
Step 1 (Initialization)
(A) Select a starting solution x_now ∈ X.
(B) x_best = x_now, best_cost = c(x_best).
(C) Set the history record H empty.
Step 2 (Choice and termination)
Determine Candidate_N(x_now) as a subset of N(H, x_now). Select x_next from Candidate_N(x_now) to minimize c(H, x). Terminate by a chosen iteration cut-off rule.
Step 3 (Update)
Re-set x_now = x_next. If c(x_now) < best_cost, perform Step 1(B). Update the history record H. Return to Step 2.
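The basic algorithm above can be sketched on the same kind of toy 1-D landscape used earlier for the descent method. Here the history H is a simple recency-based memory of recently visited solutions (the tenure and landscape are illustrative); unlike descent, the search escapes the local optimum at index 1:

```python
# Sketch of the basic TS algorithm with recency-based memory and an
# aspiration-by-objective criterion, on a toy 1-D landscape.
from collections import deque

def tabu_search(cost, n, x0, tenure=3, max_iter=30):
    x_now, x_best, best_cost = x0, x0, cost(x0)
    tabu = deque(maxlen=tenure)             # history record H (recency-based)
    for _ in range(max_iter):
        candidates = [x for x in (x_now - 1, x_now + 1) if 0 <= x < n]
        # aspiration by objective: a tabu move is admissible if it beats x_best
        admissible = [x for x in candidates
                      if x not in tabu or cost(x) < best_cost]
        if not admissible:
            break                            # no admissible move: terminate
        x_next = min(admissible, key=cost)   # best admissible move (may be uphill)
        tabu.append(x_now)                   # forbid moving straight back
        x_now = x_next
        if cost(x_now) < best_cost:
            x_best, best_cost = x_now, cost(x_now)
    return x_best

costs = [5, 3, 4, 6, 2, 7]
print(tabu_search(lambda x: costs[x], len(costs), x0=0))  # 4: the global optimum
```

From index 0 the search descends to index 1, then, with the improving move back to 0 tabu, climbs through 2 and 3 and reaches the global optimum at index 4.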

Tabu and Tabu Status
A tabu is usually specified by some attributes of the moves. Typically, when a move containing an attribute is performed, a record is maintained for its reverse attribute => preventing reversals and repetitions!
A tabu restriction is typically activated only under certain conditions:
Recency-based restriction: its attributes occurred within a limited number of iterations prior to the present iteration.
Frequency-based restriction: its attributes occurred with a certain frequency over a longer span of iterations.
The tabu restrictions and tenure should be selected to achieve cycle prevention and induce vigor into the search.

Tabu Tenure Decision
The tabu tenure, t, must be carefully selected:
For highly restrictive tabus, t should be smaller than for less restrictive tabus.
It should be long enough to prevent cycling, but short enough to avoid driving the search away from the global optimum.
t can be determined using static or dynamic rules:
Static rule: choose a value for t that remains fixed, either t = constant (a small constant, classically 7), or t = f(n), a function of the problem size n (e.g., on the order of √n).
Experimentation must be carried out to choose the best tenure!

Aspiration Criteria (AC)
Used to determine when tabu restrictions can be overridden; they contribute significantly to the quality of the algorithm. Examples of aspiration criteria:
Aspiration by default: if all available moves are classified as tabu, and are not rendered admissible by some other AC, then a "least tabu" move is selected. This is always implemented, e.g., by selecting the tabu move with the shortest time remaining until it becomes inactive.
Aspiration by objective: c(x_trial) < best_cost. A regional variant: subdivide the search space into regions R1, R2, ..., and let best_cost(R) denote the minimum c(x) found for x in R. If c(x_trial) < best_cost(R), the move aspiration is satisfied.

Stopping Conditions
TS does not converge naturally, so the search is stopped when:
A fixed number of iterations has elapsed in total; or
A fixed number of iterations has elapsed since the last best solution was found; or
A given amount of CPU time has been used.
[Figure: cost-function value vs. number of iterations for a TS run.]

TS vs. SA
Neighborhood exploration:
TS emphasizes complete neighborhood evaluation to identify moves of high quality.
SA samples neighborhood solutions randomly.
Move evaluation:
TS evaluates the relative attractiveness of moves in relation not only to the objective-function change, but also to other factors of influence.
SA evaluates moves only in terms of their objective-function change.

TS vs. SA (Cont'd)
Search guidance:
TS uses multiple thresholds, reflected in the tabu tenures and aspiration criteria, which also vary non-monotonically.
SA is based on a single threshold, implicit in the temperature parameter, that changes only monotonically.
Use of memory:
SA is memoryless.
TS makes heavy and intelligent use of both short-term and long-term memory, and can also use mid-term memory for intensification.

Summary
Design space exploration is basically an optimization problem.
Due to the complexity of the optimization problem, heuristic algorithms are widely used.
Many general heuristics are based on neighborhood search principles.
SA is applicable to almost any combinatorial optimization problem, and is very simple to implement.
TS has a natural rationale: it emulates intelligent use of memory. When properly implemented, TS often outperforms SA (with execution time often one order of magnitude smaller).