Measurability of evaluative criteria used by optimization methods
Proceedings of the th WSEAS International Conference on APPLIED MATHEMATICS, Dallas, Texas, USA, November -3

Measurability of evaluative criteria used by optimization methods

JAN PANUS, STANISLAVA SIMONOVA
Institute of System Engineering and Informatics
Faculty of Economics and Administration
University of Pardubice
Studentska 95, Pardubice
CZECH REPUBLIC

Abstract: - A fundamental requirement in directive and decision-making activities is to optimize a proposed solution. The methods used are derived on a mathematical basis; a typical implementation is the search for the optimal traffic connection between two junctions, i.e., the shortest distance between a starting and a final point. The evaluative indicator can, however, be a criterion other than distance. If the weighted standpoint is the price of realizing certain projects, or the number of workers necessary for a proposed business, then optimization methods can be used to support decision-making in everyday managerial work.

Key-Words: - Simulated Annealing Method, Optimization, Global Optimization, Measurable Criteria

1 Introduction
Optimization methods evaluate the optimal variant of a solution on a defined basis. The reviewing criterion may be measurable, but it is sometimes hard to measure or non-measurable. Measurable criteria include financial costs, time, number of workers, etc. Hard-to-measure values include social influences, ethical standpoints, etc. Sometimes it is necessary to take such scalable or non-measurable standpoints into account when searching for an optimal solution; examples are the number of interested workers or reputation (the good name of a company). To use optimization methods, all indicators must be assigned a unified form of worth representation. In general, it is not an easy task to formulate mathematical models that are suitable for such complex problems.
If a model is found, it may be described with the use of an algebraic modeling language and hence solved by numerical methods implemented in solvers. Having the optimal solution, it may be interesting to change model parameters and focus on sensitivity analysis results. This can be realized with current modeling languages and their theoretical background without significant problems. Efficient techniques for restarting the solution process are known for a broad class of optimization models.

2 Problem Formulation
Measurable criteria can be expressed as values resulting from measurement or calculation. Hard-to-measure or non-measurable criteria can be expressed by qualified expert estimation. A criterion expressing a manager's opinion about the credibility of a company cannot be quantified numerically, but it is useful to define a unified assessment process by transferring such criteria into non-dimensional numbers. That is, we choose a unified process of worth expression for measurable and non-measurable criteria on some interval (e.g., from zero to one). Every criterion can take two extreme values:
- the criterion is totally fulfilled: it takes the value one, the most positive classification;
- the criterion is totally unfulfilled: it takes the value zero, the most negative classification; however, it is impractical to work with the value zero, so the value of the criterion at this point is set to a number close to zero but greater than zero.

Fig. 1: Assessment of criterion (measurable and non-measurable)
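The unified worth representation described above can be sketched in code. This is only an illustration: the function names, the epsilon value, and the symbolic grade table are assumptions, not taken from the paper. A measurable criterion is scaled linearly into the interval, and an expert's symbolic grade for a hard-to-measure criterion is looked up in a table, with zero replaced by a small positive number as recommended above.

```python
# Illustrative sketch of mapping criteria onto a unified (0, 1] scale.
# Function names, EPS, and the grade table are assumptions for this example.

EPS = 0.01  # stands in for zero, which the text advises against using directly

def score_measurable(value, worst, best):
    """Scale a measured value (e.g. a number of workers) linearly into (0, 1]."""
    t = (value - worst) / (best - worst)
    return max(EPS, min(1.0, t))

# Hypothetical table for a hard-to-measure criterion such as company credibility,
# from very low ('--') to very high ('++').
GRADES = {'--': EPS, '-': 0.25, '0': 0.5, '+': 0.75, '++': 1.0}

def score_graded(grade):
    """Look up an expert's symbolic grade as a non-dimensional number."""
    return GRADES[grade]

print(score_measurable(75, worst=0, best=100))  # -> 0.75
print(score_graded('++'))                       # -> 1.0
```

With every criterion expressed on the same scale, measurable and non-measurable standpoints can be combined in a single worth formulation.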
It is evident that a criterion can take many values when it is fulfilled only partly. Fig. 1 shows the transformation of measurable and non-measurable criteria into the unified form. A measurable criterion (e.g., the number of workers concerned with the project) is expressed by exactly defined values (e.g., 0, 25, 50, 75; see fig. 1). Criteria that are hard to measure (the credibility of a company) record uncertainly expressed values (from low credibility, expressed by the symbol −, up to high credibility, expressed by the symbol + + +).

Fig. 2: Assessment of criterion as a non-dimensional number

The criterion is assigned a non-dimensional number in the interval from zero to one. It is possible to use the whole scale of the interval, or to restrict the expression to a predefined number of particular values within the interval (see fig. 2). One possible way is to restrict the criteria to five values: two boundary values (zero and one) and three values between them (0.25, 0.5, 0.75). This way is better suited to hard-to-measure criteria, since it keeps the necessary diversification of the resultant valuation. The adjustment of the range of the valuation scale of a criterion is, of course, optional; it depends on what degree of diversification each manager needs for the utilization of the optimization method.

2.1 Optimization methods
The goal of an optimization problem can be formulated as follows: find the combination of parameters (independent variables) which optimizes a given quantity, possibly subject to some restrictions on the allowed parameter ranges.
The quantity to be optimized (maximized or minimized) is termed the objective function; the parameters which may be changed in the quest for the optimum are called control or decision variables; the restrictions on allowed parameter values are known as constraints. A general constrained nonlinear programming problem (NLP) takes the following form:

minimize f(x)
subject to h(x) = 0, g(x) ≤ 0, x = (x_1, ..., x_n),   (1)

where f(x) is the objective function that we want to minimize, h(x) = [h_1(x), ..., h_m(x)] is a set of m equality constraints, and g(x) = [g_1(x), ..., g_k(x)] is a set of k inequality constraints. All f(x), h(x), and g(x) can be linear or nonlinear, convex or nonconvex, continuous or discontinuous, analytic (i.e., in closed form) or procedural (i.e., evaluated by some procedure or simulation). The variable space X is composed of all possible combinations of the variables x_i, i = 1, 2, ..., n. In contrast to much existing NLP theory and many methods, this formulation places no requirements on convexity, differentiability, or continuity of the objective and constraint functions. With respect to minimization problem (1), we make the following assumptions. The objective function f(x) is lower bounded, but the constraints h(x) and g(x) can be either bounded or unbounded. All variables x_i (i = 1, 2, ..., n) are bounded. All functions f(x), h(x), and g(x) can be either linear or nonlinear, convex or nonconvex, continuous or discontinuous, differentiable or non-differentiable.

Active research in the past four decades has produced a variety of methods for solving general constrained nonlinear programming problems. They fall into one of two general formulations, direct solution or transformation-based. The former aims to directly solve constrained NLP (1) by searching its feasible regions, while the latter first transforms (1) into another form before solving it. Transformation-based formulations can be further divided into penalty-based and Lagrangian-based.
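As a concrete illustration of a penalty-based transformation (a sketch under assumed names, not the paper's own formulation), constrained problem (1) can be turned into an unconstrained objective by adding a weighted penalty on constraint violations: each equality constraint contributes h_i(x)^2 and each inequality constraint contributes max(0, g_j(x))^2.

```python
def penalized(f, h_list, g_list, gamma):
    """Static-penalty transform of problem (1):
    F(x) = f(x) + gamma * (sum_i h_i(x)^2 + sum_j max(0, g_j(x))^2)."""
    def F(x):
        viol = sum(h(x) ** 2 for h in h_list)
        viol += sum(max(0.0, g(x)) ** 2 for g in g_list)
        return f(x) + gamma * viol
    return F

# Toy instance: minimize f(x) = x^2 subject to h(x) = x - 3 = 0.
F = penalized(lambda x: x * x, [lambda x: x - 3], [], gamma=100.0)
print(F(3.0))  # feasible point: penalty is zero, so F = f = 9.0
print(F(0.0))  # infeasible point: F = 0 + 100 * 9 = 900.0
```

The unconstrained F can then be handed to any of the search strategies below; choosing the penalty weight gamma well is the hard part, as the later discussion of penalty formulations notes.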
For each formulation, the strategies that can be applied are classified as local search, global search, and global optimization.

2.1.1 Local search
Local search methods use local information, such as gradients and Hessian matrices, to generate iterative points and attempt to locate constrained local minima (CLM) quickly. Local search methods are not guaranteed to find CLM, and their solution quality depends heavily on the starting points. These CLM are constrained global minima (CGM) Nocedal and Wright (3) only if (1) is convex, namely, the objective function f(x) is convex, every inequality constraint g_i(x) is convex, and every equality constraint h_i(x) is linear.
2.1.2 Global search
Global search methods employ local search methods to find CLM and, when they get stuck at local minima, utilize mechanisms such as multistart to escape from them. Hence, one can seek as many local minima as possible and pick the best one as the result. These mechanisms can be either deterministic or probabilistic and do not guarantee finding CGM.

2.1.3 Global optimization
Global optimization methods are methods that are able to find the CGM of constrained NLPs. They can either hit a CGM during their search or converge to a CGM when they stop. In this paper, we survey one of the existing methods for solving each class of discrete, continuous, and mixed-integer constrained NLPs: simulated annealing (SA), which was developed for solving unconstrained NLPs. SA searches in the variable space X and performs probabilistic descents in x, with acceptance probabilities governed by a temperature.

3 The Problem of the Monitored Criterion
In general, it is not an easy task to formulate mathematical models that are suitable for such complex problems as are used in regional decisions. Optimization methods mostly deal with one basic criterion; that is, one characteristic is changed in order to find the best solution. But the solution of regional questions often reflects a wide complex of input characteristics. It is a difficult question how to take all the input regional characteristics into account and create one non-dimensional number; this number is our criterion. Regional data sources involve huge amounts of data, so there is no problem with the quantity of data. But a big problem lies in the area of strategic questions Simonova and Capek (6); that is, it is difficult to find the optimal solution for particular regional strategic processes. Several mathematical methods make it possible to search for the optimum of a problem solution.
One of them is the method of Simulated Annealing, which finds a global optimum.

3.1 Method of Simulated Annealing
Annealing is the metallurgical process of heating up a solid and then cooling it slowly until it crystallizes. The atoms of the material have high energies at very high temperatures. This gives the atoms a great deal of freedom in their ability to restructure themselves. As the temperature is reduced, the energy of the atoms decreases. If this cooling process is carried out too quickly, many irregularities and defects will appear in the crystal structure; such overly rapid cooling is known as rapid quenching. Ideally the temperature should be decreased at a slower rate. A slower fall to the lower energy states allows a more consistent crystal structure to form, and this more stable crystal form makes the metal much more durable.

Simulated annealing seeks to emulate this process. It begins at a very high temperature, where the input values are allowed to assume a great range of random values. As the training progresses, the temperature is allowed to fall. This restricts the degree to which the inputs are allowed to vary. This often leads the simulated annealing algorithm to a better solution, just as a metal achieves a better crystal structure through the actual annealing process.

We briefly overview SA and its theory Bertsekas (1) for solving discrete unconstrained NLPs or combinatorial optimization problems. A general unconstrained NLP is defined as

minimize f(i) for i ∈ S.   (2)

procedure SA
1. set starting point i = i_0;
2. set starting temperature T = T_0 and cooling rate 0 < α < 1;
3. set N_T (number of trials per temperature);
4. while stopping condition is not satisfied do
5.   for k ← 1 to N_T do
6.     generate trial point i' from S_i using q(i, i');
7.     accept i' with probability A_T(i, i')
8.   end for
9.   reduce temperature by T ← α × T;
10. end while
11. end procedure

Fig. 3: Simulated annealing (SA) algorithm.
Here f(i) is the objective function to be minimized, and S is the solution space, i.e., the finite set of all possible solutions. A solution i_opt is called a global minimum if it satisfies f(i_opt) ≤ f(i) for all i ∈ S. Let S_opt be the set of all global minima and f_opt = f(i_opt) their objective value. The neighborhood S_i of solution i is the set of discrete points j satisfying j ∈ S_i ⟺ i ∈ S_j. Fig. 3 shows the procedure of SA for solving unconstrained problem (2). The generation probability q(i, i') is defined as q(i, i') = 1/|S_i| for all i' ∈ S_i, and the acceptance probability A_T(i, i') of accepting trial point i' is defined by:

A_T(i, i') = exp( −(f(i') − f(i))⁺ / T ),   (3)

where a⁺ = a if a > 0, and a⁺ = 0 otherwise.
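The procedure of Fig. 3, together with the generation probability q(i, i') and acceptance probability A_T(i, i') just defined, can be written as a short program. The following is an illustrative sketch, not the authors' implementation; the objective, neighborhood, and parameter values are assumptions chosen for a small example (minimizing f(i) = (i − 42)^2 over S = {0, ..., 99} with neighbors i ± 1).

```python
import math
import random

def simulated_annealing(f, neighbors, i0, T0=10.0, alpha=0.9, n_trials=50, T_min=1e-3):
    """SA as in Fig. 3: uniform neighbor generation q(i, i') = 1/|S_i|,
    acceptance probability exp(-(f(i') - f(i))^+ / T), geometric cooling T <- alpha*T."""
    i, T = i0, T0
    best = i
    while T > T_min:                 # stopping condition
        for _ in range(n_trials):    # N_T trials per temperature
            j = random.choice(neighbors(i))
            delta = f(j) - f(i)
            # accept: always if not worse, else with probability exp(-delta / T)
            if delta <= 0 or random.random() < math.exp(-delta / T):
                i = j
            if f(i) < f(best):
                best = i
        T *= alpha                   # reduce temperature
    return best

random.seed(1)
f = lambda i: (i - 42) ** 2
nbrs = lambda i: [j for j in (i - 1, i + 1) if 0 <= j <= 99]
print(simulated_annealing(f, nbrs, i0=5))  # settles at or very near 42
```

The logarithmic schedule of the convergence theory guarantees the optimum only asymptotically; the geometric schedule used here settles on a good solution in finite time.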
Accordingly, SA works as follows. Given the current solution i, SA first generates a trial point i'. If f(i') < f(i), then i' is accepted as the starting point for the next iteration; otherwise, i' is accepted with probability

exp( −(f(i') − f(i)) / T ).   (4)

The worse i' is, the smaller the probability that it is accepted for the next iteration. The above procedure is repeated N_T times before the temperature is reduced. Theoretically, if T is reduced sufficiently slowly on a logarithmic scale, then SA converges asymptotically to an optimal solution i_opt ∈ S_opt Bertsekas (1). In practice, a geometric cooling schedule, T ← αT, is generally utilized to have SA settle down at some solution i* in a finite amount of time.

SA can be modeled by an inhomogeneous Markov chain that consists of a sequence of homogeneous Markov chains of finite length, each at a specific temperature in a given temperature schedule. According to the generation probability q(i, i') and the acceptance probability A_T(i, i'), the one-step transition probability of the Markov chain is:

P_T(i, i') = q(i, i') A_T(i, i')                   if i' ∈ S_i,
P_T(i, i') = 1 − Σ_{j ∈ S_i, j ≠ i} P_T(i, j)      if i' = i,
P_T(i, i') = 0                                     otherwise,   (5)

and the corresponding transition matrix is P_T = [P_T(i, i')]. It is assumed that, by choosing the neighborhoods S_i properly, the Markov chain is irreducible, meaning that for each pair of solutions i and j there is a positive probability of reaching j from i in a finite number of steps. Consider a sequence of temperatures {T_k; k = 0, 1, 2, ...}, where T_k > T_{k+1} and lim_{k→∞} T_k = 0, and choose N_T to be the maximum of the minimum number of steps required to reach an i_opt from every j ∈ S. Since the Markov chain is irreducible and the search space S is finite, such an N_T always exists. The asymptotic convergence theorem of SA is stated as follows.
3.2 Theorem
The Markov chain modeling SA converges asymptotically to a global minimum in S_opt if the sequence of temperatures satisfies

T_k ≥ N_T Δ / log(k + 1),   (6)

where Δ = max_{i,j ∈ S} { f(j) − f(i) | j ∈ S_i }. The proof of this theorem is based on the so-called local balance equation Bertsekas (1), meaning that:

π_T(i) P_T(i, i') = π_T(i') P_T(i', i),   (7)

where π_T(i) is the stationary probability of state i at temperature T.

Although SA works for solving unconstrained NLPs, it cannot be used directly to solve constrained NLPs, which have a set of constraints to be satisfied in addition to the objective to be minimized. The widely used strategy is to transform constrained NLP (1) into an unconstrained NLP using penalty formulations Luenberger (2). For the static penalty formulation Luenberger (2), it is very difficult to choose a suitable penalty γ: if the penalty is too large, SA tends to find feasible rather than optimal solutions. For the dynamic penalty formulation Humphries and Hawkins (5), the unconstrained problem at every stage of λ(k) has to be solved optimally in order to achieve asymptotic convergence. However, this requirement is difficult to meet in practice, given only a finite amount of time in each stage. If the result in one stage is not a global minimum, then the process cannot be guaranteed to find constrained global minima. Therefore, applying SA to a dynamic-penalty formulation does not always lead to asymptotic convergence. Besides, SA cannot be used directly to search in a Lagrangian space, because a constrained solution corresponds to a saddle point, not a minimum, of the Lagrangian function. For the special case of (1), the generalized discrete augmented Lagrangian function is defined as

L_d(x, λ) = f(x) + λᵀ H(h(x)) + 1/2 ||h(x)||²,   (8)

where λ = {λ_1, λ_2, ..., λ_m} is a set of Lagrange multipliers, H is a continuous transformation function satisfying H(x) = 0 ⟺ x = 0, and

||h(x)||² = Σ_{i=1}^{m} h_i(x)².   (9)

4 Pre-Computation
The basic principle of the method of Simulated Annealing is searching for the optimum by means of changes of the criterion.
This principle seems suitable for strategic regional decisions. The management of local authorities needs optimal solutions for particular problems in the context of various regional indicators. An example of these strategic regional questions is the use of investment grants in the region; one example could be the construction of a new highway. One large project is divided into smaller projects (or stages; see fig. 4). There are many companies (Com1, Com2, ...) in the project. Every company can work on a specific stage of
the whole project. Company Com1 is able to complete the whole project (every stage), but company Com2 is able to complete only stages S1, S2 and S4. Other companies are able to complete just a few stages, or only those stages for which they have sufficient resources (vehicles, material, knowledge, etc.).

Fig. 4: Separate stages of the project (companies Com1–Com7 assigned to stages S1–S5, rated on Region (R), Services (W), Extra (A) and Reliability (T))

5 Conclusion
The use of optimization methods is necessary for the efficient management of special areas. One of these areas is the management of large projects for the development of a region. Such a problem can contain a huge amount of data, which hides large potential for answering strategic inquiries. The management of the institution that manages the projects often needs to find an optimum when solving a concrete problem, where the optimum is conditioned by various criteria. Sets of criteria values are available as time series, structural series, and other series. The SA method offers a very effective algorithm for solving combinatorial problems, and the solutions obtained are either identical or very close to the optimum solution.

A regional authority can state conditions and set criteria for every stage of the whole project. It is no problem to evaluate a table in which we can find whether a concrete company is able to fulfil the defined conditions. Table 1 shows one stage of the whole project with each company's ability to realize this stage. Company A is a transnational company that is able to realize the project for a good price, but it is not able to employ workers from the region where the project is realized; the rating of criterion R is therefore very low. The opposite holds for the reliability (T) and extra services (A) of the company.
This is because the company is very strong, with established resources and knowledge. There are many different companies in the table, with classifications of their abilities to realize the stage. The classification depends on the manager's decision and on the priorities of each local authority. It is no problem to make another classification depending on the chosen criteria. That classification can serve as an input matrix for simulated annealing as the optimizing method. It is just a matter of determining the usable criteria: the symbols used in the table must be converted into specific numbers (a matrix) and then used in the optimization method.

Table 1: Setting of criteria for one stage of the project (companies A–F rated on the criteria, including Price (F))

6 References
1. Bertsekas, D. P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press.
2. Luenberger, D. G.: Linear and Nonlinear Programming. Addison-Wesley Publishing Company, Reading, MA.
3. Nocedal, J., Wright, S. J.: Numerical Optimization. Springer Verlag, New York.
4. Rardin, R. L.: Optimization in Operations Research. Prentice Hall, Upper Saddle River, New Jersey.
5. Humphries, M., Hawkins, M. W.: Data Warehousing: Architecture and Implementation (Czech translation). Computer Press, Prague.
6. Simonova, S., Capek, J.: Fuzzy Approach in Enquiry to Regional Data Sources for Municipalities. WSEAS Transactions on Information Science and Applications, Vol. 3, No. 2.
7. Ait-Aoudia, S., Mana, I.: Numerical Solving of Geometric Constraints by Bisection: A Distributed Approach. WSEAS Transactions on Information Science and Applications, Vol. 1, No. 5.
8. Popela, P.: Computer Aided Optimization. In Proceedings of the 3rd Conference. Ostrava, Czech Republic, 1998.
D. Nagesh Kumar Associate Professor Department of Civil Engineering, Indian Institute of Science, Bangalore - 50 0 Email : nagesh@civil.iisc.ernet.in URL: http://www.civil.iisc.ernet.in/~nagesh Brief Contents
More informationPRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction
PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING KELLER VANDEBOGERT AND CHARLES LANNING 1. Introduction Interior point methods are, put simply, a technique of optimization where, given a problem
More informationCMU-Q Lecture 9: Optimization II: Constrained,Unconstrained Optimization Convex optimization. Teacher: Gianni A. Di Caro
CMU-Q 15-381 Lecture 9: Optimization II: Constrained,Unconstrained Optimization Convex optimization Teacher: Gianni A. Di Caro GLOBAL FUNCTION OPTIMIZATION Find the global maximum of the function f x (and
More informationApplied Lagrange Duality for Constrained Optimization
Applied Lagrange Duality for Constrained Optimization Robert M. Freund February 10, 2004 c 2004 Massachusetts Institute of Technology. 1 1 Overview The Practical Importance of Duality Review of Convexity
More informationSequential Coordinate-wise Algorithm for Non-negative Least Squares Problem
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Sequential Coordinate-wise Algorithm for Non-negative Least Squares Problem Woring document of the EU project COSPAL IST-004176 Vojtěch Franc, Miro
More informationLECTURE NOTES Non-Linear Programming
CEE 6110 David Rosenberg p. 1 Learning Objectives LECTURE NOTES Non-Linear Programming 1. Write out the non-linear model formulation 2. Describe the difficulties of solving a non-linear programming model
More informationRecent Developments in Model-based Derivative-free Optimization
Recent Developments in Model-based Derivative-free Optimization Seppo Pulkkinen April 23, 2010 Introduction Problem definition The problem we are considering is a nonlinear optimization problem with constraints:
More informationToday. Golden section, discussion of error Newton s method. Newton s method, steepest descent, conjugate gradient
Optimization Last time Root finding: definition, motivation Algorithms: Bisection, false position, secant, Newton-Raphson Convergence & tradeoffs Example applications of Newton s method Root finding in
More informationIntroduction to Optimization Problems and Methods
Introduction to Optimization Problems and Methods wjch@umich.edu December 10, 2009 Outline 1 Linear Optimization Problem Simplex Method 2 3 Cutting Plane Method 4 Discrete Dynamic Programming Problem Simplex
More informationThe AIMMS Outer Approximation Algorithm for MINLP
The AIMMS Outer Approximation Algorithm for MINLP (using GMP functionality) By Marcel Hunting marcel.hunting@aimms.com November 2011 This document describes how to use the GMP variant of the AIMMS Outer
More informationHYBRID GENETIC ALGORITHM WITH GREAT DELUGE TO SOLVE CONSTRAINED OPTIMIZATION PROBLEMS
HYBRID GENETIC ALGORITHM WITH GREAT DELUGE TO SOLVE CONSTRAINED OPTIMIZATION PROBLEMS NABEEL AL-MILLI Financial and Business Administration and Computer Science Department Zarqa University College Al-Balqa'
More information(1) Given the following system of linear equations, which depends on a parameter a R, 3x y + 5z = 2 4x + y + (a 2 14)z = a + 2
(1 Given the following system of linear equations, which depends on a parameter a R, x + 2y 3z = 4 3x y + 5z = 2 4x + y + (a 2 14z = a + 2 (a Classify the system of equations depending on the values of
More informationOptimization Problems Under One-sided (max, min)-linear Equality Constraints
WDS'12 Proceedings of Contributed Papers, Part I, 13 19, 2012. ISBN 978-80-7378-224-5 MATFYZPRESS Optimization Problems Under One-sided (max, min)-linear Equality Constraints M. Gad Charles University,
More informationComparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles
INTERNATIONAL JOURNAL OF MATHEMATICS MODELS AND METHODS IN APPLIED SCIENCES Comparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles M. Fernanda P.
More informationInteger Programming ISE 418. Lecture 1. Dr. Ted Ralphs
Integer Programming ISE 418 Lecture 1 Dr. Ted Ralphs ISE 418 Lecture 1 1 Reading for This Lecture N&W Sections I.1.1-I.1.4 Wolsey Chapter 1 CCZ Chapter 2 ISE 418 Lecture 1 2 Mathematical Optimization Problems
More informationAlgorithms for convex optimization
Algorithms for convex optimization Michal Kočvara Institute of Information Theory and Automation Academy of Sciences of the Czech Republic and Czech Technical University kocvara@utia.cas.cz http://www.utia.cas.cz/kocvara
More informationCollege of Computer & Information Science Fall 2007 Northeastern University 14 September 2007
College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 CS G399: Algorithmic Power Tools I Scribe: Eric Robinson Lecture Outline: Linear Programming: Vertex Definitions
More informationRobust time-varying shortest path with arbitrary waiting time at vertices
Croatian Operational Research Review 525 CRORR 8(2017), 525 56 Robust time-varying shortest path with arbitrary waiting time at vertices Gholamhassan Shirdel 1, and Hassan Rezapour 1 1 Department of Mathematics,
More informationNon-deterministic Search techniques. Emma Hart
Non-deterministic Search techniques Emma Hart Why do local search? Many real problems are too hard to solve with exact (deterministic) techniques Modern, non-deterministic techniques offer ways of getting
More informationPerformance Evaluation of an Interior Point Filter Line Search Method for Constrained Optimization
6th WSEAS International Conference on SYSTEM SCIENCE and SIMULATION in ENGINEERING, Venice, Italy, November 21-23, 2007 18 Performance Evaluation of an Interior Point Filter Line Search Method for Constrained
More informationLagrangian Relaxation: An overview
Discrete Math for Bioinformatics WS 11/12:, by A. Bockmayr/K. Reinert, 22. Januar 2013, 13:27 4001 Lagrangian Relaxation: An overview Sources for this lecture: D. Bertsimas and J. Tsitsiklis: Introduction
More informationData Mining Chapter 8: Search and Optimization Methods Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University
Data Mining Chapter 8: Search and Optimization Methods Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Search & Optimization Search and Optimization method deals with
More informationLecture 4 Duality and Decomposition Techniques
Lecture 4 Duality and Decomposition Techniques Jie Lu (jielu@kth.se) Richard Combes Alexandre Proutiere Automatic Control, KTH September 19, 2013 Consider the primal problem Lagrange Duality Lagrangian
More informationTruss structural configuration optimization using the linear extended interior penalty function method
ANZIAM J. 46 (E) pp.c1311 C1326, 2006 C1311 Truss structural configuration optimization using the linear extended interior penalty function method Wahyu Kuntjoro Jamaluddin Mahmud (Received 25 October
More informationAdvanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs
Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material
More informationIntroduction to Optimization Using Metaheuristics. Thomas J. K. Stidsen
Introduction to Optimization Using Metaheuristics Thomas J. K. Stidsen Outline General course information Motivation, modelling and solving Hill climbers Simulated Annealing 1 Large-Scale Optimization
More informationLP-Modelling. dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven. January 30, 2008
LP-Modelling dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven January 30, 2008 1 Linear and Integer Programming After a brief check with the backgrounds of the participants it seems that the following
More informationDavid G. Luenberger Yinyu Ye. Linear and Nonlinear. Programming. Fourth Edition. ö Springer
David G. Luenberger Yinyu Ye Linear and Nonlinear Programming Fourth Edition ö Springer Contents 1 Introduction 1 1.1 Optimization 1 1.2 Types of Problems 2 1.3 Size of Problems 5 1.4 Iterative Algorithms
More informationLecture 10: SVM Lecture Overview Support Vector Machines The binary classification problem
Computational Learning Theory Fall Semester, 2012/13 Lecture 10: SVM Lecturer: Yishay Mansour Scribe: Gitit Kehat, Yogev Vaknin and Ezra Levin 1 10.1 Lecture Overview In this lecture we present in detail
More informationLinear Programming. Linear programming provides methods for allocating limited resources among competing activities in an optimal way.
University of Southern California Viterbi School of Engineering Daniel J. Epstein Department of Industrial and Systems Engineering ISE 330: Introduction to Operations Research - Deterministic Models Fall
More informationIZAR THE CONCEPT OF UNIVERSAL MULTICRITERIA DECISION SUPPORT SYSTEM
Jana Kalčevová Petr Fiala IZAR THE CONCEPT OF UNIVERSAL MULTICRITERIA DECISION SUPPORT SYSTEM Abstract Many real decision making problems are evaluated by multiple criteria. To apply appropriate multicriteria
More informationSome Advanced Topics in Linear Programming
Some Advanced Topics in Linear Programming Matthew J. Saltzman July 2, 995 Connections with Algebra and Geometry In this section, we will explore how some of the ideas in linear programming, duality theory,
More informationMulticriterial Optimization Using Genetic Algorithm
Multicriterial Optimization Using Genetic Algorithm 180 175 170 165 Fitness 160 155 150 145 140 Best Fitness Mean Fitness 135 130 0 Page 1 100 200 300 Generations 400 500 600 Contents Optimization, Local
More informationThe AIMMS Outer Approximation Algorithm for MINLP
The AIMMS Outer Approximation Algorithm for MINLP (using GMP functionality) By Marcel Hunting Paragon Decision Technology BV An AIMMS White Paper November, 2011 Abstract This document describes how to
More informationLecture 2 Optimization with equality constraints
Lecture 2 Optimization with equality constraints Constrained optimization The idea of constrained optimisation is that the choice of one variable often affects the amount of another variable that can be
More informationLinear Optimization. Andongwisye John. November 17, Linkoping University. Andongwisye John (Linkoping University) November 17, / 25
Linear Optimization Andongwisye John Linkoping University November 17, 2016 Andongwisye John (Linkoping University) November 17, 2016 1 / 25 Overview 1 Egdes, One-Dimensional Faces, Adjacency of Extreme
More informationAPPLIED OPTIMIZATION WITH MATLAB PROGRAMMING
APPLIED OPTIMIZATION WITH MATLAB PROGRAMMING Second Edition P. Venkataraman Rochester Institute of Technology WILEY JOHN WILEY & SONS, INC. CONTENTS PREFACE xiii 1 Introduction 1 1.1. Optimization Fundamentals
More informationConstrained Optimization
Constrained Optimization Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Constrained Optimization 1 / 46 EC2040 Topic 5 - Constrained Optimization Reading 1 Chapters 12.1-12.3
More informationMATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS
MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS GRADO EN A.D.E. GRADO EN ECONOMÍA GRADO EN F.Y.C. ACADEMIC YEAR 2011-12 INDEX UNIT 1.- AN INTRODUCCTION TO OPTIMIZATION 2 UNIT 2.- NONLINEAR PROGRAMMING
More informationMathematical Programming and Research Methods (Part II)
Mathematical Programming and Research Methods (Part II) 4. Convexity and Optimization Massimiliano Pontil (based on previous lecture by Andreas Argyriou) 1 Today s Plan Convex sets and functions Types
More informationAlgorithm Design (4) Metaheuristics
Algorithm Design (4) Metaheuristics Takashi Chikayama School of Engineering The University of Tokyo Formalization of Constraint Optimization Minimize (or maximize) the objective function f(x 0,, x n )
More informationINTRODUCTION TO LINEAR AND NONLINEAR PROGRAMMING
INTRODUCTION TO LINEAR AND NONLINEAR PROGRAMMING DAVID G. LUENBERGER Stanford University TT ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California London Don Mills, Ontario CONTENTS
More informationHybrid Optimization Coupling Electromagnetism and Descent Search for Engineering Problems
Proceedings of the International Conference on Computational and Mathematical Methods in Science and Engineering, CMMSE 2008 13 17 June 2008. Hybrid Optimization Coupling Electromagnetism and Descent Search
More informationISM206 Lecture, April 26, 2005 Optimization of Nonlinear Objectives, with Non-Linear Constraints
ISM206 Lecture, April 26, 2005 Optimization of Nonlinear Objectives, with Non-Linear Constraints Instructor: Kevin Ross Scribe: Pritam Roy May 0, 2005 Outline of topics for the lecture We will discuss
More informationCONLIN & MMA solvers. Pierre DUYSINX LTAS Automotive Engineering Academic year
CONLIN & MMA solvers Pierre DUYSINX LTAS Automotive Engineering Academic year 2018-2019 1 CONLIN METHOD 2 LAY-OUT CONLIN SUBPROBLEMS DUAL METHOD APPROACH FOR CONLIN SUBPROBLEMS SEQUENTIAL QUADRATIC PROGRAMMING
More informationNOTATION AND TERMINOLOGY
15.053x, Optimization Methods in Business Analytics Fall, 2016 October 4, 2016 A glossary of notation and terms used in 15.053x Weeks 1, 2, 3, 4 and 5. (The most recent week's terms are in blue). NOTATION
More informationBayesian Methods in Vision: MAP Estimation, MRFs, Optimization
Bayesian Methods in Vision: MAP Estimation, MRFs, Optimization CS 650: Computer Vision Bryan S. Morse Optimization Approaches to Vision / Image Processing Recurring theme: Cast vision problem as an optimization
More informationEnsemble methods in machine learning. Example. Neural networks. Neural networks
Ensemble methods in machine learning Bootstrap aggregating (bagging) train an ensemble of models based on randomly resampled versions of the training set, then take a majority vote Example What if you
More informationThe Cross-Entropy Method
The Cross-Entropy Method Guy Weichenberg 7 September 2003 Introduction This report is a summary of the theory underlying the Cross-Entropy (CE) method, as discussed in the tutorial by de Boer, Kroese,
More informationModern Methods of Data Analysis - WS 07/08
Modern Methods of Data Analysis Lecture XV (04.02.08) Contents: Function Minimization (see E. Lohrmann & V. Blobel) Optimization Problem Set of n independent variables Sometimes in addition some constraints
More informationProbabilistic Graphical Models
School of Computer Science Probabilistic Graphical Models Theory of Variational Inference: Inner and Outer Approximation Eric Xing Lecture 14, February 29, 2016 Reading: W & J Book Chapters Eric Xing @
More informationSimplicial Global Optimization
Simplicial Global Optimization Julius Žilinskas Vilnius University, Lithuania September, 7 http://web.vu.lt/mii/j.zilinskas Global optimization Find f = min x A f (x) and x A, f (x ) = f, where A R n.
More informationDiscrete Optimization. Lecture Notes 2
Discrete Optimization. Lecture Notes 2 Disjunctive Constraints Defining variables and formulating linear constraints can be straightforward or more sophisticated, depending on the problem structure. The
More information