CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM

2.1 CLASSIFICATION OF CONVENTIONAL TECHNIQUES

Classical optimization methods can be classified into two distinct groups: (1) direct search methods and (2) gradient-based methods. Direct search methods use only the objective function and constraint values to guide the search. Because they do not use derivative information, direct search methods are usually slow and require many function evaluations to converge. Gradient-based methods use the first- and/or second-order derivatives of the objective function and/or constraints to guide the search process. They converge quickly to the optimal solution when the objective function and the constraints are differentiable; otherwise they may fail to obtain even a near-optimal solution. Hence, gradient-based methods are not efficient for problems with non-differentiable or discontinuous functions and/or constraint equations (Rao 2009).
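
For concreteness, the derivative-driven updates referred to above can be written in their standard textbook form; the following generic formulas are added for illustration and are not taken from the cited references.

```latex
% Steepest descent: step against the gradient with step length \alpha_k
x_{k+1} = x_k - \alpha_k \, \nabla f(x_k)

% Newton's method: additionally uses second-order (Hessian) information
x_{k+1} = x_k - \left[ \nabla^2 f(x_k) \right]^{-1} \nabla f(x_k)
```

The most common difficulties associated with classical methods are:

- Convergence depends on the chosen initial solution.
- Most of the algorithms tend to get stuck at a sub-optimal solution.
- The algorithms are not efficient in handling problems with a discrete search space.
- The algorithms cannot be used efficiently on a parallel machine.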

2.2 CONVENTIONAL TECHNIQUES AND THEIR DRAWBACKS

Because the ORPD problem is a large-scale, highly constrained, non-linear, non-convex optimization problem (Abou et al 2009), it has taken decades to develop efficient algorithms for its solution, and many different mathematical techniques have been employed. A wide variety of classical optimization techniques, such as non-linear programming (Shoults and Sun 1982, Mamundur and Chenoweth 1981), quadratic programming (Burchett et al 1984, Aoki et al 1987), linear programming (Moto-palomino 1987, Alsac et al 1990), Newton-based techniques (Sun et al 1984, Santos 1995), the sequential unconstrained minimization technique (Rahli 1999), and interior point methods (Granville 1994, Momoh et al 1999), have been applied to solve OPF problems.

Generally, non-linear programming based procedures have many drawbacks, such as insecure convergence and algorithmic complexity. The quadratic programming technique is a special form of non-linear programming whose objective function is quadratic with linear constraints; quadratic programming based techniques have disadvantages associated with the piecewise quadratic approximation. Newton-based techniques have convergence characteristics that are sensitive to the initial conditions, and they may even fail to converge when the initial conditions are inappropriate. The sequential unconstrained minimization techniques are known to exhibit numerical difficulties when the penalty factors become extremely large. Although linear programming methods are fast and reliable, they have disadvantages associated with the piecewise linear cost approximation. The interior point method converts the inequality constraints to equalities by introducing non-negative slack variables (the standard conversion is shown at the end of this section). This method has been reported to be computationally efficient; however, if the step size is not chosen properly, the sub-linear problem may have a solution that is infeasible in the original non-linear domain. In addition, this method suffers from the selection of the initial solution, the termination conditions, and the optimality criteria, and in most cases it is unable to handle non-linear quadratic objective functions (Abido 2002). A comprehensive survey on the disadvantages of conventional optimization techniques is presented in (Momoh et al 1999).

In general, most classical optimization techniques apply sensitivity analysis, and gradient-based optimization algorithms solve the problem by linearizing the objective function and the system constraints around an operating point. Unfortunately, the ORPD problem is a highly non-linear and multimodal optimization problem, i.e. there exists more than one local optimum. Hence, local optimization techniques are not suitable for such problems. Moreover, there is no criterion to decide whether a local solution is also the global solution. Therefore, conventional optimization methods that make use of derivatives and gradients are not able to identify the global optimum (Abou et al 2009). Furthermore, many mathematical assumptions, such as convex, analytic, and differentiable objective functions, have to be made to simplify the problem, whereas the ORPD problem is in general an optimization problem with non-convex, non-smooth, and non-differentiable objective functions. It therefore becomes essential to develop optimization techniques that can overcome these drawbacks and handle such difficulties.
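
The slack-variable conversion mentioned above takes the standard form shown below; this is the generic textbook reformulation rather than a formulation given in the cited works.

```latex
% Each inequality constraint g_i(x) <= 0 becomes an equality by adding
% a non-negative slack variable s_i:
g_i(x) \le 0 \quad \Longrightarrow \quad g_i(x) + s_i = 0, \qquad s_i \ge 0
```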

2.3 META-HEURISTICS: EVOLUTIONARY ALGORITHMS

Evolutionary Algorithms (EAs) are a sub-class of meta-heuristic techniques. EAs are based on Darwin's postulate of survival of the fittest. An evolutionary algorithm begins by initializing a population of candidate solutions to a problem. New solutions are then created by randomly varying the initial population. All solutions are evaluated with respect to how well they address the task, and a selection criterion is applied to weed out those that are below par. The process is iterated with the selected set of solutions until a specified criterion is met; a minimal sketch of this generic loop is given after the list below. The advantages of EAs are their adaptability to change and their ability to quickly generate good enough solutions. EAs differ from other optimization methods in possessing the following features (Ma and Lai 1995):

- EAs search from a population of points rather than a single point. The population can move over hills and across valleys, so EAs can discover a globally or near-globally optimal point. Because the computation for each individual in the population is independent of the others, EAs have an inherent parallel computation capability.
- EAs use payoff (fitness or objective function) information directly to guide the search, rather than derivatives or other auxiliary knowledge. EAs can therefore deal with the non-smooth, non-continuous, and non-differentiable functions that arise in real-life optimization problems.
- EAs use probabilistic transition rules, rather than deterministic rules, to select generations. They are thus a kind of stochastic optimization algorithm that can search a complicated and uncertain area to find the global optimum.
- EAs are more flexible and robust than conventional optimization methods.
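
The following minimal Python sketch illustrates the generic initialize-vary-evaluate-select loop described above on a one-dimensional minimization problem; the population size, mutation step, and test function are illustrative assumptions rather than values prescribed in this chapter.

```python
import random

def evolutionary_algorithm(fitness, lower, upper, pop_size=20, generations=100):
    # 1. Initialize a population of candidate solutions at random.
    population = [random.uniform(lower, upper) for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Create new solutions by randomly varying the current ones.
        offspring = [min(upper, max(lower, x + random.gauss(0.0, 0.1)))
                     for x in population]
        # 3. Evaluate all solutions; 4. select the fittest to survive.
        combined = population + offspring
        combined.sort(key=fitness)          # lower fitness value = better
        population = combined[:pop_size]
    return population[0]

# Usage: minimize f(x) = (x - 3)^2 over [-10, 10].
best = evolutionary_algorithm(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
print(best)  # converges towards x = 3
```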

2.4 DIFFERENT METHODS OF EVOLUTIONARY ALGORITHMS

In this section, we present an overview of the major evolutionary algorithms.

2.4.1 Evolutionary Programming

Evolutionary Programming (EP) belongs to the class of population-based search strategies. It operates on populations of real values (floating points) that represent the parameter set of the problem to be solved over some finite ranges. At the start of an EP run, the population is initialized with random individuals. EP then searches the space of possible real values for better individuals. The search is guided by fitness values returned by the environment; these give a measure of how well adapted each individual is in terms of solving the problem and hence determine its probability of appearing in future generations. Two types of rules are used by EP in its search for highly fit individuals, namely the selection rule and the combination rule (Lai 1998). Selection determines the individuals that will represent the next generation; it involves competition, in which each individual in the combined population has to compete with some other individuals to get a chance to enter the next generation. The selection mechanism is based on a fitness measure, or objective function value, defined for each individual in the population. The combination rule operates on selected individuals to produce the new individuals that appear in the next generation; it is used to introduce new individuals into the current population or to create a new population based on the current one. EP uses only one evolutionary operator in the combination process, most commonly mutation, which is the random occasional alteration of the information contained in an individual. The combination rule acts on individuals that have been previously selected by the selection mechanism. A short sketch of this scheme follows.
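
A minimal Python sketch of the EP scheme described above is given below, assuming real-valued individuals, Gaussian mutation as the sole combination operator, and pairwise tournament-style competition; all parameter values are illustrative placeholders.

```python
import random

def evolutionary_programming(fitness, dim, lower, upper,
                             pop_size=30, generations=200, sigma=0.1):
    # Random real-valued individuals within the finite parameter ranges.
    pop = [[random.uniform(lower, upper) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Combination rule: mutation is the only evolutionary operator.
        offspring = [[min(upper, max(lower, x + random.gauss(0.0, sigma)))
                      for x in ind] for ind in pop]
        # Selection rule: each individual in the combined population
        # competes against randomly chosen opponents; wins decide survival.
        combined = pop + offspring
        wins = []
        for ind in combined:
            opponents = random.sample(combined, 5)
            wins.append(sum(fitness(ind) <= fitness(opp) for opp in opponents))
        ranked = sorted(zip(wins, combined), key=lambda w: -w[0])
        pop = [ind for _, ind in ranked[:pop_size]]
    return min(pop, key=fitness)
```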

2.4.2 Particle Swarm Optimization

PSO is one of the optimization techniques belonging to the EAs. The method was developed through a simulation of simplified social models. PSO combines social psychology principles from the socio-cognition of human agents with evolutionary computation, and was motivated by the behavior of organisms such as fish schooling and bird flocking. According to research on flocks of birds, birds find food by flocking (not individually), which leads to the assumption that information is shared inside the flock. Moreover, observation of human groups suggests that the behavior of each individual (agent) is also based on behavior patterns authorized by the group, such as customs, and on the experience of each individual. This assumption is the basic concept of PSO. PSO was originally developed through the simulation of a flock of birds in two-dimensional space. The position of each agent is represented as (x, y) in the XY plane, and the velocity (displacement vector) is expressed by vx (the velocity along the X-axis) and vy (the velocity along the Y-axis). Modification of the position is realized using the position and velocity information (Yoshida et al 2000). The features of PSO are given below, followed by a short sketch of the update rule:

- PSO is characterized as simple in concept, easy to implement, and computationally efficient. Unlike other heuristic techniques, PSO has a flexible and well-balanced mechanism to enhance the global and local exploration abilities.
- Because it is based on a simple concept, the computation time is short and little memory is required.
- The global and local best positions are computed at each iteration, and the output is the new direction of search. Once this direction is computed, it is followed by the cluster of birds.
- PSO differs from the ordinary genetic algorithm in that crossover and mutation operators are not used.
- All particles in PSO are kept as members of the population throughout the course of the run; PSO does not implement the concept of survival of the fittest.
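
The sketch below illustrates the standard PSO velocity and position updates implied by the description above; the inertia weight w and acceleration constants c1, c2 are common textbook defaults, not values prescribed in this chapter.

```python
import random

def pso(fitness, dim, lower, upper, n_particles=30, iterations=100,
        w=0.7, c1=1.5, c2=1.5):
    x = [[random.uniform(lower, upper) for _ in range(dim)]
         for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                      # personal best positions
    gbest = min(pbest, key=fitness)[:]             # global best position
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # New velocity: inertia plus pulls towards the personal
                # and global best positions.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(upper, max(lower, x[i][d] + v[i][d]))
            if fitness(x[i]) < fitness(pbest[i]):  # all particles survive;
                pbest[i] = x[i][:]                 # only their bests change
        gbest = min(pbest, key=fitness)[:]
    return gbest
```

Note that, consistent with the feature list above, no particle is ever discarded; only the remembered best positions are updated.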

2.4.3 Differential Evolution

Differential Evolution (DE) is an EA technique proposed by Storn and Price in 1995. DE is a simple population-based, stochastic, parallel-search evolutionary algorithm for global optimization that solves real-valued problems based on the principles of natural evolution. DE is capable of handling non-differentiable, non-linear, and multimodal objective functions. In DE, the population consists of real-valued vectors whose dimension equals the number of design parameters/control variables. The population is randomly initialized within the initial parameter bounds. The optimization process is conducted by means of three main operations: mutation, crossover, and selection (Abou et al 2009). In each generation, each individual of the current population becomes a target vector. For each target vector, the mutation operation produces a mutant vector by adding the weighted difference between two randomly chosen vectors to a third vector. The crossover operation generates a new vector, called the trial vector, by mixing the parameters of the mutant vector with those of the target vector. If the trial vector obtains a better fitness value than the target vector, the trial vector replaces the target vector in the next generation (Varadharajan and Swarup 2008). The features of DE are given below, and a sketch of the mutation-crossover-selection cycle follows the list:

- Extracting distance and direction information from the population to generate random deviations results in an adaptive scheme with excellent convergence properties.
- DE is generally efficient and robust (with respect to reproducing close results in different runs).
- DE is simple to implement and needs little tuning of its parameters.
- DE differs from other EAs in the mutation and recombination process. Unlike stochastic techniques such as GA and ES, where perturbation occurs in accordance with a random quantity, DE uses weighted differences between solution vectors to perturb the population.
- DE employs a greedy selection process with implicit elitist features.
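
The following sketch implements the classic DE/rand/1/bin scheme matching the mutation-crossover-selection cycle described above; the scale factor F and crossover rate CR are typical defaults, not values from the text.

```python
import random

def differential_evolution(fitness, dim, lower, upper,
                           pop_size=30, generations=200, F=0.5, CR=0.9):
    pop = [[random.uniform(lower, upper) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            target = pop[i]
            # Mutation: add the weighted difference of two random vectors
            # to a third vector, all chosen apart from the target.
            a, b, c = random.sample(
                [p for j, p in enumerate(pop) if j != i], 3)
            mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
            # Crossover: mix mutant and target parameters into a trial
            # vector; j_rand guarantees at least one mutant parameter.
            j_rand = random.randrange(dim)
            trial = [min(upper, max(lower,
                         mutant[d] if (random.random() < CR or d == j_rand)
                         else target[d]))
                     for d in range(dim)]
            # Greedy selection: the trial replaces the target only if it
            # is at least as fit (the implicit elitism noted above).
            if fitness(trial) <= fitness(target):
                pop[i] = trial
    return min(pop, key=fitness)
```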

2.4.4 Genetic Algorithm

The Genetic Algorithm (GA) is an optimization algorithm based on the mechanics of natural selection and genetics. In GA, candidate solutions to the given problem are analogous to individuals in a population. Each individual is encoded as a string, called a chromosome. New candidate solutions are produced from parent chromosomes by the crossover operator, and the mutation operator is then applied to the population. The quality of each individual is evaluated and rated by the so-called fitness function. Similar to the natural selection mechanism in biological systems, fitter individuals have a greater chance of passing on their information to the next generation. When a chromosome with the desired fitness is obtained, it is taken as the optimal solution and the optimization process is terminated; otherwise, the process is repeated until the maximum number of generations is reached, and the fittest chromosome obtained so far is taken to be the optimal solution (Yan et al 2006). GA has the following features, and a sketch of a binary-coded GA follows the list:

- GA searches from a population of points, not from a single point.
- GA does not require linearity, continuity, or differentiability of the objective function, nor does it need continuous variables.
- GA treats integer and discrete variables naturally.
- GA has an inherent parallel computation ability (Ma and Lai 1995).
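
A minimal binary-coded GA sketch corresponding to the description above is given below; the chromosome length, crossover and mutation rates, and tournament selection are illustrative choices rather than specifics from this thesis.

```python
import random

def decode(chrom, lower, upper):
    # Map a bit-string chromosome to a real value in [lower, upper].
    value = int(chrom, 2)
    return lower + (upper - lower) * value / (2 ** len(chrom) - 1)

def genetic_algorithm(fitness, lower, upper, n_bits=16,
                      pop_size=30, generations=100, pc=0.9, pm=0.01):
    pop = [''.join(random.choice('01') for _ in range(n_bits))
           for _ in range(pop_size)]
    cost = lambda ch: fitness(decode(ch, lower, upper))
    for _ in range(generations):
        # Selection: fitter individuals are more likely to become parents.
        parents = [min(random.sample(pop, 2), key=cost)
                   for _ in range(pop_size)]
        children = []
        for p1, p2 in zip(parents[::2], parents[1::2]):
            # Crossover: exchange the tails of the parent chromosomes.
            if random.random() < pc:
                cut = random.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            # Mutation: occasionally flip individual bits.
            for ch in (p1, p2):
                children.append(''.join(
                    b if random.random() > pm else ('1' if b == '0' else '0')
                    for b in ch))
        pop = children
    return decode(min(pop, key=cost), lower, upper)
```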

2.5 CONCLUSION

In this chapter, the applications of conventional optimization techniques to solve the ORPD problem were discussed, along with their disadvantages and drawbacks. The suitability of Evolutionary Algorithms such as EP, PSO, DE, and GA for solving ORPD problems was presented, and the salient features of these EAs were enumerated.