REAL-CODED GENETIC ALGORITHMS CONSTRAINED OPTIMIZATION. Nedim TUTKUN


REAL-CODED GENETIC ALGORITHMS CONSTRAINED OPTIMIZATION Nedim TUTKUN nedimtutkun@gmail.com

Outline: Unconstrained Optimization; Ackley's Function; GA Approach for Ackley's Function; Nonlinear Programming; Penalty Function; Genetic Operators; Numerical Examples

Unconstrained Optimization. Optimization plays a central role in operations research, management science, and engineering design problems. It deals with problems of minimizing or maximizing a function of several variables, usually subject to equality and/or inequality constraints. Optimization techniques have had an increasingly great impact on our society; both the number and variety of their applications continue to grow rapidly, and no slowdown is in sight.

Unconstrained Optimization. However, many engineering design problems are very complex in nature and difficult to solve with conventional optimization techniques. In recent years, genetic algorithms have received considerable attention regarding their potential as a novel optimization technique. In this lecture we will discuss the applications of genetic algorithms to unconstrained optimization, nonlinear programming, stochastic programming, goal programming, and interval programming.

Unconstrained Optimization. Unconstrained optimization deals with the problem of minimizing or maximizing a function in the absence of any restrictions. In general, an unconstrained optimization problem can be mathematically represented as follows:

min f(x) subject to x ∈ Ω

where f is a real-valued function and Ω, the feasible set, is a subset of E^n.

Unconstrained Optimization. A point x* ∈ Ω is said to be a local minimum of f over Ω if there is an ε > 0 such that f(x) ≥ f(x*) for all x ∈ Ω within a distance ε of x*. A point x* ∈ Ω is said to be a global minimum of f over Ω if f(x) ≥ f(x*) for all x ∈ Ω. The necessary conditions for a local minimum are based on the differential calculus of f, that is, the gradient of f, defined as

∇f(x) = [∂f(x)/∂x1, ∂f(x)/∂x2, ..., ∂f(x)/∂xn]^T

Unconstrained Optimization. The Hessian of f at x, denoted ∇²f(x) or F(x), is the n × n matrix of second partial derivatives, with entries [∇²f(x)]ij = ∂²f(x)/∂xi∂xj. Even though most practical optimization problems have side restrictions that must be satisfied, the study of techniques for unconstrained optimization provides a basis for further studies. In this lecture, we will discuss how to solve the unconstrained optimization problem with genetic algorithms.
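As an aside, the gradient and Hessian can be obtained symbolically in MATLAB. The sketch below assumes the Symbolic Math Toolbox is available and, purely for illustration, uses the quadratic objective that appears later in these notes:

syms x y
f = 5 - (x - 2)^2 - 2*(y - 1)^2;   % example objective from the constrained example later on
g = gradient(f, [x y]);            % gradient vector of f
H = hessian(f, [x y]);             % Hessian matrix of f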

Ackley's Function. Ackley's function is a continuous and multimodal test function obtained by modulating an exponential function with a cosine wave of moderate amplitude. Its topology, as shown in Figure 1, is characterized by an almost flat outer region and a central hole or peak where modulations by the cosine wave become more and more influential. Ackley's function is as follows.

Ackley's Function. Ackley's function (in two variables) is

f(x1, x2) = -c1 · exp(-c2 · sqrt((x1² + x2²)/2)) - exp((cos(c3·x1) + cos(c3·x2))/2) + c1 + e

where c1 = 20, c2 = 0.2, c3 = 2π, and e = 2.71828. The known optimal solution is (x1, x2) = (0, 0) with f(x1, x2) = 0.
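As a minimal sketch (not part of the original slides), the two-variable Ackley function with the constants above could be coded in MATLAB as follows; the function name ackley2 is chosen here only for illustration:

function f = ackley2(x1, x2)
% Two-variable Ackley function with c1 = 20, c2 = 0.2, c3 = 2*pi.
c1 = 20; c2 = 0.2; c3 = 2*pi;
f = -c1*exp(-c2*sqrt((x1.^2 + x2.^2)/2)) ...
    - exp((cos(c3*x1) + cos(c3*x2))/2) + c1 + exp(1);
end

Evaluating ackley2(0, 0) returns 0, consistent with the stated optimum.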

Fig. 1. Plot of Ackley's function.

Fig. 2. Contour plot of Ackley's function.

Ackley's Function. As Ackley pointed out, this function causes moderate complications for the search, because a strictly local optimization algorithm that performs hill climbing would surely get trapped in a local optimum, whereas a search strategy that scans a slightly bigger neighborhood would be able to cross intervening valleys. Therefore, Ackley's function provides a reasonable test case for genetic search.

Minimization of Ackley's Function. To minimize Ackley's function, we use the following implementation of the genetic algorithm:
1. Real-number encoding
2. Arithmetic crossover
3. Nonuniform mutation
4. Top pop_size selection

Minimization of Ackley's Function. The arithmetic crossover combines two chromosomes v1 and v2 as follows:

v1_new = λ·v1 + (1 - λ)·v2
v2_new = λ·v2 + (1 - λ)·v1

where λ is a uniformly distributed random number between 0 and 1.
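A minimal MATLAB sketch of this operator (the function name arith_xover is an illustrative assumption, not the course code):

function [c1, c2] = arith_xover(v1, v2)
% Arithmetic crossover of two real-coded parent vectors v1 and v2.
lambda = rand;                      % uniform random weight in [0, 1]
c1 = lambda*v1 + (1 - lambda)*v2;   % first offspring
c2 = lambda*v2 + (1 - lambda)*v1;   % second offspring
end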

Minimization of Ackley's Function. The nonuniform mutation is given as follows: for a given parent v, if the element x_k is selected for mutation, the resulting offspring is v_new = [x1, x2, ..., x_k_new, ..., xn], where x_k_new is randomly chosen from two possible values:

x_k_new = x_k + Δ(t, x_k^U - x_k)   or   x_k_new = x_k - Δ(t, x_k - x_k^L)

where x_k^U and x_k^L are the upper and lower bounds for x_k.

Minimization of Ackley's Function. The function Δ(t, y) returns a value in the range [0, y] such that Δ(t, y) approaches 0 as t increases (t is the generation number):

Δ(t, y) = y · r · (1 - t/T)^b

where r is a random number from [0, 1], T is the maximal generation number, and b is a parameter determining the degree of nonuniformity.
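A hedged MATLAB sketch of the nonuniform mutation described above (the helper name and argument layout are assumptions, not the course code):

function v = nonuniform_mutation(v, k, t, T, b, xl, xu)
% Mutate gene k of parent v at generation t (of T generations), with
% nonuniformity parameter b and bounds xl(k) <= v(k) <= xu(k).
delta = @(y) y * rand * (1 - t/T)^b;    % Delta(t, y) shrinks towards 0 as t -> T
if rand < 0.5
    v(k) = v(k) + delta(xu(k) - v(k));  % move towards the upper bound
else
    v(k) = v(k) - delta(v(k) - xl(k));  % move towards the lower bound
end
end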

Minimization of Ackley's Function. Top pop_size selection produces the next generation by selecting the best pop_size chromosomes from among the parents and offspring. For this case, we can simply use the objective values as fitness values and sort the chromosomes according to them.
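This selection step can be sketched in MATLAB as follows; the variable names parents, offspring, f_parents, f_offspring, and pop_size are assumed for illustration:

% Top pop_size selection: keep the best pop_size individuals from the
% combined pool of parents and offspring. For minimization the objective
% values themselves serve as fitness, sorted in ascending order.
pool  = [parents; offspring];        % each row is one chromosome
fpool = [f_parents; f_offspring];    % corresponding objective values
[~, idx] = sort(fpool, 'ascend');    % best (smallest) first
newpop = pool(idx(1:pop_size), :);   % next generation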

Minimization of Ackley's Function. The parameters of the genetic algorithm are set as follows:
pop_size: 10
Maxgen: 1000
Pm: 0.1
Pc: 0.3

Table 1. Initial population of 10 random chromosomes.

Table 2. The corresponding fitness function values.

Minimization of Ackley's Function. This means that the chromosomes v2, v6, v8, and v9 were selected for crossover, and the offspring were then generated from these pairs.

Minimization of Ackley's Function. The mutation is then performed. Because there are a total of 2 × 10 = 20 genes in the whole population, we generate a sequence of random numbers r_k (k = 1, ..., 20) from the range [0, 1]. The gene selected for mutation is:

bit_pos: 11, chrom_num: 6, variable: x1, random_num: 0.081393

and its value changes from -4.068506 to -0.959655 after mutation.

Minimization of Ackley's Function. The fitness value for each offspring is then evaluated.

Minimization of Ackley's Function. The best 10 chromosomes among the parents and offspring then form the new population.

Minimization of Ackley's Function. The corresponding fitness values of the variables [x1, x2] are then obtained.

Minimization of Ackley's Function. We have now completed one iteration of the genetic procedure (one generation). At the 1000th generation, the best chromosome yields the fitness value f(x1*, x2*) = -0.005456.

Fig. 3. Contour plot of Ackley's function.

Contour Plot of the Cost Function. x_opt = (0.6196×10^-4, -0.1231×10^-4), F_min = 1.7879×10^-4. Fig. 4. Variation of the fitness value with generation.

Fig. 5. Scattering of the initial population.

Fig. 6. Scattering of the population at the 50th generation.

Fig. 7. Scattering of the population at the 100th generation.

Fig. 8. Scattering of the population at the 150th generation.

Fig. 9. Scattering of the population at the 200th generation.

Nonlinear Programming. Nonlinear programming (or constrained optimization) deals with the problem of optimizing an objective function in the presence of equality and/or inequality constraints. Nonlinear programming is an extremely important tool used in almost every area of engineering, operations research, and mathematics, because many practical problems cannot be successfully modeled as a linear program.

Nonlinear Programming. The general nonlinear programming problem may be written as follows:

min f(x)
subject to g_i(x) ≤ 0, i = 1, 2, ..., m
           h_i(x) = 0, i = 1, 2, ..., l
           x ∈ X

where f, g_i, and h_i are real-valued functions defined on E^n, X is a subset of E^n, and x is an n-dimensional real vector with components x1, x2, ..., xn.

Nonlinear Programming. The above problem must be solved for the values of the variables x1, x2, ..., xn that satisfy the restrictions while minimizing the function f. The function f is usually called the objective function or criterion function. Each of the constraints g_i(x) ≤ 0 is called an inequality constraint, and each of the constraints h_i(x) = 0 is called an equality constraint.

Nonlinear Programming. The set X typically includes lower and upper bounds on the variables, which are usually called domain constraints. A vector x ∈ X satisfying all the constraints is called a feasible solution to the problem, and the collection of all such solutions forms the feasible region. The nonlinear programming problem then is to find a feasible point x* such that f(x) ≥ f(x*) for each feasible point x.
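To make the definition concrete, a small MATLAB sketch of a feasibility test is given below; the constraint handles g and h and the tolerance tol are illustrative assumptions, not part of the lecture:

% x is feasible if every inequality g_i(x) <= 0 holds and every equality
% h_i(x) = 0 holds (to within a tolerance tol).
is_feasible = @(x, g, h, tol) all(g(x) <= 0) && all(abs(h(x)) <= tol);

% Example with hypothetical constraints:
g = @(x) [x(1) + x(2) - 5; -x(1)];             % g_1(x) <= 0 and g_2(x) <= 0
h = @(x) x(1) + 4*x(2) - 3;                    % h_1(x) = 0
feasible = is_feasible([1; 0.5], g, h, 1e-6);  % returns true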

Nonlinear Programming. Such a point x* is called an optimal solution. Unlike linear programming problems, the conventional solution methods for nonlinear programming are very complex and not very efficient. In the past few years, there has been a growing effort to apply genetic algorithms to the nonlinear programming problem, and these lecture notes show how to solve it with genetic algorithms in general.

Nonlinear Programming. The central problem in applying genetic algorithms to constrained optimization is how to handle constraints, because the genetic operators used to manipulate the chromosomes often yield infeasible offspring. Several techniques have been proposed to handle constraints with genetic algorithms; Michalewicz has published a very good survey on this problem.

Nonlinear Programming. The existing techniques can be roughly classified as follows:
Rejecting strategy
Repairing strategy
Modifying genetic operators strategy
Penalizing strategy
Each of these strategies has advantages and disadvantages.

Rejecting Strategy. The rejecting strategy discards all infeasible chromosomes created throughout the evolutionary process. This is a popular option in many genetic algorithms. The method may work reasonably well when the feasible search space is convex and constitutes a reasonable part of the whole search space. However, such an approach has serious limitations.

Rejecting Strategy. For example, for many constrained optimization problems where the initial population consists of infeasible chromosomes only, it might be essential to improve them. Moreover, quite often the system can reach the optimum more easily if it is possible to "cross" an infeasible region (especially in nonconvex feasible search spaces).

Repairing Strategy. Repairing a chromosome involves taking an infeasible chromosome and generating a feasible one through some repairing procedure. For many combinatorial optimization problems, it is relatively easy to create a repairing procedure. Empirical tests of genetic algorithm performance on a diverse set of constrained combinatorial optimization problems have shown that the repair strategy did indeed surpass other strategies in both speed and performance.

Repairing Strategy. The repairing strategy depends on the existence of a deterministic repair procedure to convert an infeasible offspring into a feasible one. The weakness of the method is its problem dependence: for each particular problem a specific repair algorithm must be designed, and for some problems the process of repairing infeasible chromosomes might be as complex as solving the original problem.

Modifying Genetic Operator Strategy. One reasonable approach for dealing with the issue of feasibility is to develop a problem-specific representation and specialized genetic operators that maintain the feasibility of chromosomes. Such systems are often much more reliable than genetic algorithms based on the penalty approach, and many users have built very successful genetic algorithms in many areas using problem-specific representations and specialized operators. However, the genetic search of this approach is confined to the feasible region.

Penalty Strategy. The strategies above have the advantage that they never generate infeasible solutions, but they have the disadvantage that they consider no points outside the feasible region. For a highly constrained problem, infeasible solutions may occupy a relatively large portion of the population; in such a case, feasible solutions may be difficult to find if we confine the genetic search to the feasible region.

Penalty Strategy. It has been suggested that constraint-handling techniques allowing movement through infeasible regions of the search space tend to yield more rapid optimization and produce better final solutions than approaches limiting search trajectories only to feasible regions. The penalizing strategy is such a technique, proposed to allow infeasible solutions to be considered in the genetic search.

Penalty Function. The penalty technique is perhaps the most common technique used to handle infeasible solutions in genetic algorithms for constrained optimization problems. In essence, it transforms the constrained problem into an unconstrained one by penalizing infeasible solutions: a penalty term is added to the objective function for any violation of the constraints.

Penalty Function. The basic idea of the penalty technique is borrowed from conventional optimization. It is a natural question whether there is any difference between using the penalty method in conventional optimization and in genetic algorithms. In conventional optimization, the penalty technique is used to generate a sequence of infeasible points whose limit is an optimal solution to the original problem.

Penalty Function. There the major concern is how to choose a proper penalty value so as to speed convergence and avoid premature termination. In genetic algorithms, by contrast, the penalty technique is used to keep a certain number of infeasible solutions in each generation so as to drive the genetic search towards an optimal solution from both the feasible and the infeasible side.

Penalty Function. We do not simply reject the infeasible solutions in each generation, because some of them may provide much more useful information about the optimal solution than some feasible solutions. The major concern is how to determine the penalty term so as to strike a balance between information preservation (keeping some infeasible solutions) and selective pressure (rejecting some infeasible solutions), avoiding both under-penalty and over-penalty.

Penalty Function. In general, the solution space contains two parts: a feasible area and an infeasible area. We make no assumptions about these subspaces; in particular, they need be neither convex nor connected, as shown in Figure 10. Handling infeasible chromosomes is therefore far from trivial. From the figure we can see that the infeasible solution b is much nearer to the optimum a than the infeasible solution d and the feasible solution c.

Fig. 10. Solution space: feasible area and infeasible area.

Penalty Function. We may hope to give less penalty to b than to d, even though b is a little farther from the feasible area than d. We can also believe that b contains much more information about the optimum than c, even though b is infeasible. However, we have no a priori knowledge about the optimum, so generally it is very hard to judge which solution is better than the others.

Penalty Function. The main issue of the penalty strategy is how to design the penalty function p(x) so that it effectively guides the genetic search toward the promising area of the solution space. The relationship between an infeasible chromosome and the feasible part of the search space plays a significant role in penalizing it: the penalty value corresponds to the "amount" of its infeasibility under some measure. There is no general guideline for designing penalty functions, and constructing an efficient one is quite problem-dependent.
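One common choice, shown here only as an illustrative sketch (it is not prescribed by the slides), is to measure infeasibility by the total constraint violation and scale it by a coefficient rho:

% p(x) >= 0, equal to 0 for feasible x; g returns the inequality values
% (g_i(x) <= 0) and h the equality values (h_i(x) = 0).
rho = 10;   % penalty coefficient (problem-dependent)
p = @(x, g, h) rho * (sum(max(g(x), 0)) + sum(abs(h(x))));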

Evaluation Function with Penalty Term. Penalty techniques transform the constrained problem into an unconstrained problem by penalizing infeasible solutions. In general, there are two possible ways to construct the evaluation function with a penalty term. The first is the addition form:

eval(x) = f(x) + p(x)

where x represents a chromosome, f(x) is the objective function of the problem, and p(x) is the penalty term.

Evaluation Function with Penalty Term. For maximization problems, we usually require that

p(x) = 0 if x is feasible
p(x) < 0 otherwise.

Let |p(x)|_max and |f(x)|_min be the maximum of |p(x)| and the minimum of |f(x)| among the infeasible solutions in the current population, respectively. We also require that |p(x)|_max ≤ |f(x)|_min to avoid negative fitness values.

Evaluation Function with Penalty Term. For minimization problems, we usually require that

p(x) = 0 if x is feasible
p(x) > 0 otherwise.

The second way is to take the multiplication form:

eval(x) = f(x) · p(x)

Evaluation Function with Penalty Term. In this case, for maximization problems we require that

p(x) = 1 if x is feasible
0 ≤ p(x) < 1 otherwise,

and for minimization problems we require that

p(x) = 1 if x is feasible
p(x) > 1 otherwise.
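The two forms can be written compactly in MATLAB; this is a sketch for a minimization problem, assuming f and p are supplied as function handles obeying the sign conventions above:

eval_add  = @(x, f, p) f(x) + p(x);    % addition form: p(x) = 0 if feasible, p(x) > 0 otherwise
eval_mult = @(x, f, p) f(x) .* p(x);   % multiplication form: p(x) = 1 if feasible, p(x) > 1 otherwise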

Evaluation Function with Penalty Term. Note that for minimization problems, the fitter chromosome has the lower value of eval(x). For some selection methods, the objective values must be transformed into fitness values in such a way that the fitter chromosome has the larger fitness value.

Example. Find the optimal value of the following constrained function:

z = 5 - (x - 2)² - 2(y - 1)²
subject to x + 4y = 3, 0 ≤ x, y ≤ 5

Constrained Optimization (Matlab Code)

clear; clc;
% Step 1: Initialization
a = 0; b = 10; n = 2; rh = 0.6667;
G = 100; pm = 0.001; pc = 0.6; N = 1000;
fmin = []; fave = []; fmax = []; maxfit = 0;
x = rand(N, n);   % initial population of N chromosomes with n genes (the transcript's rand(n,n) is assumed to be rand(N,n))
for k = 1:n
    x(:,k) = linmap(x(:,k), a, b);   % map each gene to a real number in the range [a, b]
end

Constrained Optimization (Matlab Code)

for g = 1:G
    fprintf('g:%.0f\n', g);
    % Step 2: Selection
    f = fitval3(x(:,1), x(:,2), rh);
    s = selpop(x, f);
    % Step 3: Crossover
    c = artxover(s, pc);
    % Step 4: Mutation
    x = pertmutate(c, pm, a, b);
    [maxfit, x] = elit(x(:,1), x(:,2), maxfit, rh);
    f = fitval3(x(:,1), x(:,2), rh);
    fmin = [fmin maxfit];
    fave = [fave mean(f)];
    fmax = [fmax max(f)];
end   % end of the generation loop

Constrained Optimization (Matlab Code)

g = 1:G;
plot(g, fmax, 'r', g, fave, 'b');
xlabel('Generation'); ylabel('Fitness Value');
axis([1 100 0 2.1])
legend('max', 'ave', 'Location', 'best'); legend boxoff;
f = fun3(x(:,1), x(:,2));
[fmx, ind] = max(f);
optx = x(ind(1), :)
yoptx = fun3(optx(:,1), optx(:,2))

Constrained Optimization (Matlab Code)

function f = fitval3(x, y, rh)
% Penalized fitness: subtract rh*h when the equality constraint x + 4y = 3 is violated.
f = []; n = length(x);
z = fun3(x, y);
h = x + 4*y;
bh = 3;
for k = 1:n
    if (h(k) ~= bh)
        f(k) = z(k) - rh*h(k);   % penalize infeasible chromosomes
    else
        f(k) = z(k);             % feasible chromosomes keep their objective value
    end
end

Constrained Optimization (Matlab Code)

function z = fun3(x, y)
% Objective function z = 5 - (x - 2)^2 - 2*(y - 1)^2.
z = 5 - (x-2).^2 - 2*(y-1).^2;

Convergence of Constrained Optimization