On a Class of Global Optimization Test Functions

Crina Grosan (1) and Ajith Abraham (2,*)
(1) Department of Computer Science, Babes-Bolyai University, Cluj-Napoca, Romania
(2) Machine Intelligence Research Labs (MIR Labs), Scientific Network for Innovation and Research Excellence, Auburn, Washington 98071, USA
email: ajith.abraham@ieee.org
(*) Corresponding author

Abstract: This paper provides a theoretical proof that, for a certain class of functions whose first partial derivatives have the same equation with respect to all variables, the optimum value (minimum or maximum) occurs at a point where all the variables take the same value. This result helps researchers working with high-dimensional functions to reduce the computational burden, because the search has to be performed with respect to only one variable.

Key words: multidimensional functions, global optimization, first partial derivatives

1. Introduction

Among the global optimization test problems there is a class of functions that has not yet been treated separately by any of the conventional deterministic and stochastic approaches: the functions for which the first partial derivatives with respect to all variables have the same equation. For example, consider the Rastrigin function

    F : [-5.12, 5.12]^n \to \mathbb{R},
    F(x) = 10n + \sum_{i=1}^{n} \left( x_i^2 - 10 \cos(2\pi x_i) \right),

whose global optimum is at (0, 0, ..., 0). In the two-dimensional case, the Rastrigin function is

    F(x, y) = 20 + x^2 - 10 \cos(2\pi x) + y^2 - 10 \cos(2\pi y).
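To make the practical implication concrete before the formal treatment, here is a minimal sketch (our own illustration, assuming NumPy; it is not part of the original paper). Because the global optimum of such a function lies at a point with equal coordinates, the 30-dimensional Rastrigin problem collapses to a one-variable search along the diagonal x_1 = ... = x_n:

```python
import numpy as np

def rastrigin(x):
    """n-dimensional Rastrigin function: F(x) = 10n + sum_i (x_i^2 - 10 cos(2 pi x_i))."""
    x = np.asarray(x, dtype=float)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

n = 30  # dimension of the search space [-5.12, 5.12]^n

def diagonal(t):
    """Restriction of F to the diagonal x_1 = ... = x_n = t: a one-variable function."""
    return rastrigin(np.full(n, t))

# Brute-force one-dimensional search on a dense grid (the grid contains t = 0).
ts = np.linspace(-5.12, 5.12, 10001)
t_star = min(ts, key=diagonal)
print(t_star, diagonal(t_star))  # prints 0.0 and 0.0, matching the optimum at (0, ..., 0)
```

A grid of about 10^4 points along one axis replaces a search of the full 30-dimensional box, which no grid of comparable size could cover.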

The first partial derivatives of the two-dimensional Rastrigin function are

    \frac{\partial F(x, y)}{\partial x} = 2x + 20\pi \sin(2\pi x),

    \frac{\partial F(x, y)}{\partial y} = 2y + 20\pi \sin(2\pi y).

From the equations above it is evident that both derivatives have the same equation, given by

    f(t) = 2t + 20\pi \sin(2\pi t).

Thus, for all similar examples in which the first partial derivatives have the same equation, the global optimum of the function lies at a point where all the variables have the same value.

2. Problem Formulation and Proof

We formulate the problem and prove the following result.

Theorem 2.1. Let F : Q \to \mathbb{R} be a twice-differentiable continuous function, where Q is a bounded and closed subset of \mathbb{R}^n. If the first partial derivatives of F have the same equation f with respect to all variables, then F takes its global optimum (maximum or minimum) at a point where all the variables have the same value, and where g(t) = \int f(t)\,dt also takes its global optimum (maximum or minimum).

Proof. Without loss of generality, we prove the theorem for three variables (for better readability); the generalization is straightforward. Suppose we have F = F(x, y, z) and all three partial derivatives have the same equation f(t). Thus, from

    \frac{\partial F(x, y, z)}{\partial x} = f(x)    (1)

we have

    F(x, y, z) = \int f(x)\,dx + \Phi(y, z).    (2)

From

    \frac{\partial F(x, y, z)}{\partial y} = f(y)    (3)

and (2) we obtain:

    \frac{\partial}{\partial y} \left( \int f(x)\,dx + \Phi(y, z) \right) = \frac{\partial}{\partial y} \Phi(y, z) = f(y).    (4)

This implies

    \Phi(y, z) = \int f(y)\,dy + \Psi(z).    (5)

From

    \frac{\partial F(x, y, z)}{\partial z} = f(z)    (6)

and (5) we obtain

    \frac{\partial}{\partial z} \left( \int f(x)\,dx + \int f(y)\,dy + \Psi(z) \right) = \frac{d}{dz} \Psi(z) = f(z),    (7)

which gives

    \Psi(z) = \int f(z)\,dz + c.    (8)

Thus, from (2), (5) and (8) we obtain

    F(x, y, z) = \int f(x)\,dx + \int f(y)\,dy + \int f(z)\,dz + c,    (9)

where c is a constant. Hence, to find the global minimum or maximum of F, we should take the same value for x, y and z, namely the value at which g takes its global minimum or maximum.

A graphical example is illustrated in Fig. 1 for the sum squares function of two variables. In the two-dimensional case this function is given by F(x, y) = x^2 + y^2, and the optimum of F is attained at the same variable value as the optimum of either x^2 or y^2. All three functions (F, x^2 and y^2) are depicted from different angles. We considered the minimization of F within the range [-10, 10], for which the optimum point is x* = (0, 0) with F(x*) = 0.

Fig. 1 Example of the initial function and its decomposition for the sum squares function.
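As a numerical sanity check of decomposition (9) (a sketch we add for illustration; it is not part of the original paper), take the three-variable Rastrigin function. With f(t) = 2t + 20\pi \sin(2\pi t), one antiderivative is g(t) = t^2 - 10 \cos(2\pi t), so F(x, y, z) - (g(x) + g(y) + g(z)) must equal the constant c = 30 at every point:

```python
import numpy as np

def g(t):
    """One antiderivative of f(t) = 2t + 20*pi*sin(2*pi*t)."""
    return t ** 2 - 10 * np.cos(2 * np.pi * t)

def F(x, y, z):
    """Three-variable Rastrigin function: F = 30 + sum of (v^2 - 10 cos(2 pi v))."""
    return 30 + sum(v ** 2 - 10 * np.cos(2 * np.pi * v) for v in (x, y, z))

# At 1000 random points, F(x, y, z) - (g(x) + g(y) + g(z)) should be the constant c = 30.
rng = np.random.default_rng(0)
pts = rng.uniform(-5.12, 5.12, size=(1000, 3))
residuals = np.array([F(x, y, z) - (g(x) + g(y) + g(z)) for x, y, z in pts])
print(residuals.min(), residuals.max())  # both ~30.0, up to floating-point error
```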

3. Perspectives and Usefulness

There is a huge amount of work dealing with global optimization. The results presented in this paper are addressed especially to researchers using various kinds of optimization techniques. On the one hand, it is wasteful to spend immense computational effort searching the solution space for each variable of these test functions when, in fact, it is only required to search with respect to one of them. On the other hand, the complexity of these approaches increases with the number of variables, which is one of the reasons why current methods can only deal with functions having at most a few hundred variables. We believe that researchers are not taking advantage of this property. In the literature we could find only a few works [12] dealing with correlations among the variables, without explicitly mentioning this property or making use of it. In order to eliminate such correlations, there have been some attempts to modify the function itself, for example by rotation. Floudas and Pardalos [3] and Hedar and Fukushima [6] compiled lists of the most used test functions. Not all the functions described there have this property, but many of them do, as do many real-world problems. Some common examples are the Sum Squares, Rastrigin, Levi, Quadric and Sphere functions, among others.

The aim of this paper is to draw the optimization community's attention to the possibility of reducing the complexity, in order to obtain faster and better results, by considering important properties of the functions such as the one described above. This property should be exploited before an optimization algorithm is applied, so that the complexity of the problem can be drastically reduced. From the huge amount of work existing in the literature we can observe that the approaches encounter difficulties in dealing with high-dimensional functions, no matter what kind of technique is employed [1], [2], [4], [5], [7]-[11], [13]-[20]. The findings in this paper can help at least in practical applications which require fast results.
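To give a sense of the potential saving, the following sketch (our own illustration, with a deliberately naive random-search baseline; the printed numbers are indicative only) spends an identical budget of about 10^4 function evaluations on a 1000-dimensional Rastrigin problem, once over the full box and once along the diagonal justified by Theorem 2.1:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000  # a dimensionality at which generic search becomes very expensive

def rastrigin(x):
    """Vectorized Rastrigin function; x has shape (..., n)."""
    return 10 * x.shape[-1] + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x), axis=-1)

budget = 10_001  # same number of function evaluations for both strategies

# Naive baseline: uniform random search over the full 1000-dimensional box.
samples = rng.uniform(-5.12, 5.12, size=(budget, n))
print("random search best :", rastrigin(samples).min())  # on the order of 1e4

# Property-aware search: the same budget spent on the one-variable
# restriction F(t, t, ..., t), since the optimum has equal coordinates.
ts = np.linspace(-5.12, 5.12, budget)
diag = rastrigin(np.broadcast_to(ts[:, None], (budget, n)))
print("diagonal search best:", diag.min(), "at t =", ts[np.argmin(diag)])  # 0.0 at t = 0.0
```

With the same budget, the diagonal search recovers the exact optimum, while the full-dimensional random search remains at objective values on the order of 10^4.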

4. Discussions and Conclusions

This paper addresses a certain class of optimization functions which are twice continuously differentiable. It is observed and proved that functions whose first partial derivatives have the same equation have an optimum at a point where all the variables take the same value. The authors' findings can help in approaching complicated test functions having this feature, a feature which unfortunately continues to be ignored, at least by the large research community employing various computationally intelligent optimization algorithms. The class of functions satisfying the assumption discussed in this paper is quite special. For some of them the property can be seen immediately (either the functions have few variables or they have a simple form), but even in general the property is not difficult for a mathematician to verify. Even though these findings are, in essence, very simple and absolutely trivial from a mathematical point of view, their advantages have never been exploited and made use of.

Acknowledgement

We wish to thank Professor Negash G. Medhin (North Carolina State University, USA) and Professor M. Sambandham (Moorehouse College, USA) for some technical clarifications related to this manuscript. The first author acknowledges the support of research grant CNCSIS IDEI-543.

References

[1] A. Addis, S. Leyffer, A Trust-Region Algorithm for Global Optimization, Computational Optimization and Applications, 35, pp. 287-304, 2006.
[2] M.T.M. Emmerich, K.C. Giannakoglou, B. Naujoks, Single and Multiobjective Evolutionary Optimization Assisted by Gaussian Random Field Metamodels, IEEE Transactions on Evolutionary Computation, 10(4), pp. 421-439, 2006.
[3] C.A. Floudas, P.M. Pardalos, A Collection of Test Problems for Constrained Global Optimization Algorithms, In G. Goos and J. Hartmanis (Eds.), Lecture Notes in Computer Science, Volume 455, Springer-Verlag, Berlin, 1990.
[4] C. Grosan, M. Oltean, Adaptive Representation for Single Objective Optimization, Soft Computing, 9(8), pp. 594-605, 2005.
[5] C. Grosan, A. Abraham, A Novel Global Optimization Technique for High Dimensional Functions, International Journal of Intelligent Systems, 24(4), pp. 421-440, 2009.
[6] A. Hedar, M. Fukushima, Heuristic Pattern Search and Its Hybridization with Simulated Annealing for Nonlinear Global Optimization, Optimization Methods and Software, 19, pp. 291-308, 2004.
[7] M.J. Hirsch, C.N. Meneses, P.M. Pardalos, M.G.C. Resende, Global Optimization by Continuous GRASP, Optimization Letters, 1, pp. 201-212, 2007.
[8] S. Hofinger, T. Schindler, A. Aszodi, Parallel Global Optimization of High-Dimensional Problems, In D. Kranzlmuller et al. (Eds.), Euro PVM/MPI, LNCS 2474, pp. 148-155, 2002.
[9] A.I.F. Vaz, L.N. Vicente, A Particle Swarm Pattern Search Method for Bound Constrained Global Optimization, Journal of Global Optimization, 2007.
[10] V.K. Koumousis, C.P. Katsaras, A Saw-Tooth Genetic Algorithm Combining the Effects of Variable Population Size and Reinitialization to Enhance Performance, IEEE Transactions on Evolutionary Computation, 10(1), pp. 19-28, 2006.
[11] K. Krishnakumar, Micro-Genetic Algorithms for Stationary and Nonstationary Function Optimization, In Proceedings of SPIE Intelligent Control and Adaptive Systems, pp. 289-296, 1989.

[12] J.J. Liang, P.N. Suganthan, K. Deb, Novel Composition Test Functions for Numerical Global Optimization, In Proceedings of the IEEE Swarm Intelligence Symposium, pp. 68-75, 2005.
[13] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions, IEEE Transactions on Evolutionary Computation, 10(3), pp. 281-295, 2006.
[14] H. Liu, A. Abraham, W. Zhang, A Fuzzy Adaptive Turbulent Particle Swarm Optimization, International Journal of Innovative Computing and Applications, 1(1), 2007.
[15] H. Maaranen, K. Miettinen, A. Penttinen, On Initial Populations of a Genetic Algorithm for Continuous Optimization Problems, Journal of Global Optimization, 37, pp. 405-436, 2007.
[16] A. Migdalas, P.M. Pardalos, S. Storoy (Eds.), Parallel Computing in Optimization, Kluwer Academic Publishers, 1997.
[17] K.E. Parsopoulos, M.N. Vrahatis, Recent Approaches to Global Optimization Problems through Particle Swarm Optimization, Natural Computing, 1, pp. 235-306, 2002.
[18] J.F. Schutte, J.A. Reinbolt, B.J. Fregly, R.T. Haftka, A.D. George, Parallel Global Optimization with the Particle Swarm Algorithm, International Journal for Numerical Methods in Engineering, 61, pp. 2296-2315, 2004.
[19] S. Stepanenko, B. Engels, Gradient Tabu Search, Journal of Computational Chemistry, 28(2), pp. 601-611, 2007.
[20] T.B. Trafalis, S. Kasap, A Novel Metaheuristic Approach for Continuous Global Optimization, Journal of Global Optimization, 23, pp. 171-190, 2002.