3.3 Function minimization
Besides root-finding, the minimization of functions constitutes a major problem in computational economics. Let f(x) : X → R be a function that maps an n-dimensional vector x ∈ X ⊆ R^n into the real line R. Then the corresponding minimization problem is defined as

    min_{x ∈ X} f(x).

Note that minimization and maximization do not have to be considered separately, as minimizing the function f(x) is exactly equal to maximizing −f(x). Obviously, minimization is very closely related to root-finding, as the derivative of a function becomes zero in a minimum. Consequently, if derivatives can be calculated easily, root-finding is a fair alternative to using a minimization method. However, there are cases where using a minimization method is superior, e.g. if derivatives can't be calculated in a reasonable amount of time. In addition, non-differentiability of f eliminates the option of choosing a root-finding instead of a minimization algorithm. Last but not least, minimization methods often give us the possibility of constraining the set of possible values for x. This is obviously not possible with root-finding algorithms. Figure 3.4 addresses this issue. Looking for the unconstrained minimum of f, we can see that we could use either a root-finding or a minimization algorithm.

[Figure 3.4: Constrained minimization]

However, if we would like to find the minimum of f only on the interval [a, b], root-finding will not help us, as the derivative f' is positive on the whole interval. A constrained minimization method could, however, give us the actual solution a to this problem.
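The equivalence of minimization and maximization can be made concrete with a small sketch. This is my own Python illustration (the book's toolbox is Fortran), and all names in it are made up; the minimizer is a deliberately crude grid search, just enough to show the sign flip:

```python
# Any maximization problem can be handed to a minimizer by negating
# the objective: max g(x) on [a, b] equals -min of -g(x) on [a, b].

def maximize(g, minimizer, a, b):
    """Maximize g on [a, b] using any routine that minimizes on [a, b]."""
    x = minimizer(lambda x: -g(x), a, b)
    return x, g(x)

def grid_minimize(f, a, b, n=100001):
    """Crude grid minimizer, only for demonstration purposes."""
    xs = [a + (b - a) * i / (n - 1) for i in range(n)]
    return min(xs, key=f)

# g peaks at x = 2 with value 3; the minimizer finds it via -g
x, gx = maximize(lambda x: -(x - 2.0)**2 + 3.0, grid_minimize, 0.0, 4.0)
```

This sign flip is exactly what the programs below do when they minimize the negative of the household's utility function.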
Chapter 3 Numerical Solution Methods

Example. A household can consume two goods x_1 and x_2. He values the consumption of those goods with the joint utility function u(x_1, x_2) = x_1^0.4 + (1 + x_2)^0.5. Here x_2 acts as a luxury good, i.e. the household will only consume x_2 if his available resources W are large enough. x_1 on the other hand is a normal good and will always be consumed. Naturally, we have to assume x_1, x_2 >= 0. With the prices for the goods being p_1 and p_2, the household has to solve the optimization problem

    max_{x_1, x_2} x_1^0.4 + (1 + x_2)^0.5   s.t.   p_1 x_1 + p_2 x_2 = W.

Note that there is no analytical solution to this problem. Due to marginal utility increasing to infinity as x_1 approaches zero, the optimal x_1 will always be strictly larger than zero. However, we might end up in a corner solution x_2 = 0. As the set of allowed values for x_2 is constrained, we will not be able to use a root-finding method to solve for the optimal choice of x_1 and x_2. Hence, we need a minimization routine. When using a minimization algorithm with an equality constraint, it is always useful to first plug the constraint into the utility function and thereby reduce the dimension of the optimization problem. We therefore reformulate the above problem to

    max_{x_2 >= 0} [(W - p_2 x_2)/p_1]^0.4 + (1 + x_2)^0.5.

This problem can be solved with various minimization routines.

The Golden-Search method

The Golden-Search method minimizes a one-dimensional function on an initially defined interval [a, b]. The idea behind this method is quite similar to that of bisection search. Golden-Search, however, divides the interval [a_i, b_i] into two sub-intervals by using the points x_{i,1} and x_{i,2} with

    x_{i,j} = a_i + α_j (b_i - a_i)   with   α_1 = (3 - √5)/2   and   α_2 = 1 - α_1 = (√5 - 1)/2.   (3.4)

The values α_1 and α_2 are chosen in a way such that the interval [a_i, b_i] is intersected according to the golden ratio.[13] We now compute the function values f(x_{i,j}) for j = 1, 2 and compare them.
The next iteration's interval is then chosen according to

    a_{i+1} = a_i     and  b_{i+1} = x_{i,2}   if f(x_{i,1}) < f(x_{i,2}),
    a_{i+1} = x_{i,1} and  b_{i+1} = b_i       otherwise.
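The split points from (3.4) and the iteration rule above can be sketched compactly in Python. This is my own translation of the scheme, not the book's Fortran program; the function and variable names are made up, while the parameter values p_1 = 1, p_2 = 2, W = 1 are the ones used in the example below:

```python
import math

def golden_search(f, a, b, tol=1e-6, itermax=200):
    """Minimize f on [a, b] via the Golden-Search iteration rule."""
    alpha1 = (3.0 - math.sqrt(5.0)) / 2.0
    alpha2 = (math.sqrt(5.0) - 1.0) / 2.0
    for _ in range(itermax):
        # split [a, b] according to the golden ratio, see (3.4)
        x1 = a + alpha1 * (b - a)
        x2 = a + alpha2 * (b - a)
        if abs(b - a) < tol:
            return x1
        if f(x1) < f(x2):
            b = x2    # low values of f lie near x1, keep [a, x2]
        else:
            a = x1    # otherwise keep [x1, b]
    return x1

# the household example: maximize utility by minimizing its negative
p1, p2, W = 1.0, 2.0, 1.0
def neg_utility(x2):
    x1 = (W - p2 * x2) / p1       # budget constraint plugged in
    return -(x1**0.4 + (1.0 + x2)**0.5)

x2_star = golden_search(neg_utility, 0.0, (W - p1 * 0.01) / p2)
x1_star = (W - p2 * x2_star) / p1
# corner solution: x2_star is (numerically) 0 and x1_star is 1
```

The sketch reproduces the corner solution discussed in the text: the search collapses onto the left border a = 0 of the interval.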
[Figure 3.5: Golden-Search method for finding minima]

The idea behind this iteration rule is quite simple. If f(x_{i,1}) < f(x_{i,2}), the lower values of f will be located near x_{i,1}, not x_{i,2}. Consequently, one chooses the interval [a_i, x_{i,2}] as the new iteration interval and therefore rules out the greater values of f, see Figure 3.5. Program 3.9 shows how to apply the Golden-Search method to the above problem. At the beginning of the program we choose values for the model parameters p_1, p_2 and W. We assume the price of the luxury good to be twice the price of the normal good and normalize the available resources to W = 1. We then have to set the starting interval. Note that setting a = 0 restricts x_2 to be non-negative. b is finally initialized in a way that guarantees the consumption of good 1 to be positive for any x_2 ∈ [a, b]. In the iteration process we calculate x_{i,1} and x_{i,2} and the respective function values as shown in (3.4). Note that we always use the negative of the actual function value in order to assure that we maximize the household's utility. In contrast to the previous section, we define our tolerance level as b_i - a_i. This is because we now have two values x_{i,1} and x_{i,2} in every iteration and don't know which one to choose as an approximation to our minimum. Consequently, taking the interval width of [a_i, b_i] as tolerance criterion ensures that we can take any value within this interval and meet our tolerance criterion. If our criterion is satisfied, we print the minimum and the respective function value on the console. If not, we set the new iteration's interval [a_{i+1}, b_{i+1}] according to the above iteration rule. Taking a look at the output of the program, we find that x_1 = 1 and x_2 = 0 is the optimal solution to our optimization problem. This indicates that the constraint x_2 >= 0 is actually binding for this price and resource combination.
We can test this by extending the initial interval [a, b] and setting a = -0.9d0 instead of 0. Doing this, Golden-Search returns the unconstrained optima x_1 and x_2 of the optimization problem in our example. Note that, similar to the bisection search method, the convergence speed of Golden-Search is linear.

[13] Note that α_2 is exactly the inverse of the golden ratio (1 + √5)/2.
Program 3.9: Golden-Search method in one dimension

program golden

    ! variable declaration
    implicit none
    real*8, parameter :: p(2) = (/1d0, 2d0/)
    real*8, parameter :: W = 1d0
    real*8 :: a, b, x1, x2, f1, f2
    integer :: iter

    ! initial interval and function values
    a = 0d0
    b = (W-p(1)*0.01d0)/p(2)

    ! start iteration process
    do iter = 1, 200

        ! calculate x1 and x2 and function values
        x1 = a+(3d0-sqrt(5d0))/2d0*(b-a)
        x2 = a+(sqrt(5d0)-1d0)/2d0*(b-a)
        f1 = -(((W-p(2)*x1)/p(1))**0.4d0+(1d0+x1)**0.5d0)
        f2 = -(((W-p(2)*x2)/p(1))**0.4d0+(1d0+x2)**0.5d0)

        write(*,'(i4,f12.7)')iter, abs(b-a)

        ! check for convergence
        if(abs(b-a) < 1d-6)then
            write(*,'(/a,f12.7)')' x_1 =',(W-p(2)*x1)/p(1)
            write(*,'(a,f12.7)')' x_2 =',x1
            write(*,'(a,f12.7)')' u   =',-f1
            stop
        endif

        ! get new values
        if(f1 < f2)then
            b = x2
        else
            a = x1
        endif
    enddo

end program

Brent's and Powell's algorithms

The problem with Golden-Search is its slow convergence, meaning that the function f is called quite often. For well-behaved functions a more sophisticated algorithm is based on Brent's method. It relies on parabolic approximations of the actual function f. Finding
the minimum of a parabola is quite easy. If the parabola is given by a + bx + cx^2, then its minimum is located at x = -b/(2c). Figure 3.6 shows how Brent's method proceeds in finding a minimum.

[Figure 3.6: Brent's method for finding minima]

We start with the initial interval [a, b] and compute the intersection point x_1 = (a + b)/2. We then compute a parabola that exactly contains the three points (a, f(a)), (b, f(b)) and (x_1, f(x_1)). The minimum of this parabola can be calculated and is denoted by x_2. We then replace b with x_2 and again compute a parabola through our new points. The method is repeated until we reach convergence. Note that in every iteration step, we now have to calculate only one function value, namely the value at the new point. For applying Golden-Search, we needed two function values in every iteration step. There also is a multidimensional extension of Brent's method called Powell's algorithm. This algorithm minimizes a multidimensional function by taking a set of n-dimensional, linearly independent direction vectors and minimizing f along the lines spanned by the n different vectors. Figure 3.7 demonstrates how Powell's algorithm works. We start at an initial guess x_1. We then take our first optimization direction, represented by the arrow leaving x_1, and minimize the function along this direction. Given this minimum, we take the next direction and minimize along this one. We then again step back to the first direction etc. We iterate until we finally reach convergence. For the minimization along the different directions, we can again use Brent's method. Both Brent's and Powell's method are included in the module minimization. It contains a subroutine fminsearch that is called with five arguments. Program 3.10 demonstrates its use taking the above example. The program also incorporates a module globals that stores the parameters p_1, p_2 and W of the model. We also have to define a function that
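The parabolic step at the heart of Brent's method can be sketched in a few lines. This is my own Python illustration, not the module's implementation; the divided-difference bookkeeping is one of several equivalent ways to recover the coefficients b and c of the interpolating parabola:

```python
def parabola_vertex(xa, fa, xb, fb, xc, fc):
    """Return the minimizer of the parabola through three given points.

    For f(x) = a + b*x + c*x**2 the minimum sits at x = -b/(2*c)."""
    # Newton divided differences give the quadratic coefficients
    d1 = (fb - fa) / (xb - xa)
    d2 = ((fc - fb) / (xc - xb) - d1) / (xc - xa)   # this is c
    b_coef = d1 - d2 * (xa + xb)                    # this is b
    return -b_coef / (2.0 * d2)

# one step on f(x) = (x - 1.3)**2 + 2: since f itself is a parabola,
# the fitted vertex is already the exact minimizer
f = lambda x: (x - 1.3)**2 + 2.0
x_new = parabola_vertex(0.0, f(0.0), 1.0, f(1.0), 2.0, f(2.0))
# x_new equals 1.3 up to rounding
```

On a general function the vertex x_new would replace one of the three bracketing points, and the fit would be repeated until convergence, which is exactly the iteration the text describes.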
[Figure 3.7: Powell's algorithm for finding minima in multiple dimensions]

we want fminsearch to minimize. The function utility exactly matches the declared interface and returns the negative of the utility function u(x_1, x_2). Note that, if we wanted to define a multidimensional optimization problem, we would have declared the input variable x to the function as an assumed-size vector by using x(:). We now set the optimization interval borders a and b and make an initial guess x. fminsearch is then called with the initial guess x, a scalar f in which the function value in the minimum is stored, the interval borders and the function to minimize. The solution of the minimization problem is returned in x. If we had a multi-dimensional problem, a and b should be vectors of the same size as x. The result is finally printed on the console.

The problem of local and global minima

Unfortunately, there is no guarantee that a minimization method will find the global minimum of a function. It might well be that one ends up in a local minimum. Figure 3.8 shows this problem with Golden-Search. We can see that f has a local minimum on the left and the global minimum on the right side of the optimization interval [a, b]. When we perform the first step of Golden-Search, we calculate the points x_1 and x_2 and the respective function values. As the next iteration's optimization interval, we would then choose [a, x_2], as f(x_1) < f(x_2). Obviously, with this step, we have already eliminated the chance of finding the global minimum, as after the first step we concentrate on the left-hand side of the interval [a, b]. An easily implementable approach to overcome this problem is to first divide the original interval [a, b] into n sub-intervals [x̄_i, x̄_{i+1}] of equal size with x̄_1 = a and x̄_{n+1} = b. On each of these sub-intervals, one then performs a full minimization procedure to find the
Program 3.10: Brent and Powell for finding minima

program brentmin

    ! module use
    use globals
    use minimization

    ! variable declaration and interface
    implicit none
    real*8 :: x, f, a, b

    interface
        function utility(x)
            real*8, intent(in) :: x
            real*8 :: utility
        end function
    end interface

    ! initial interval and function values
    a = 0d0
    b = (W-p(1)*0.01d0)/p(2)

    ! set starting point
    x = (a+b)/2d0

    ! call minimizing routine
    call fminsearch(x, f, a, b, utility)

    ! output
    write(*,'(/a,f12.7)')' x_1 =',(W-p(2)*x)/p(1)
    write(*,'(a,f12.7)')' x_2 =',x
    write(*,'(a,f12.7)')' u   =',-f

end program

minimum x*_i of f on the interval [x̄_i, x̄_{i+1}]. Now we have a set of minima {x*_i}, i = 1, ..., n. Our global minimum finally is the value x*_i with the smallest function value f(x*_i). Figure 3.9 demonstrates the approach. We used a division into five intervals in this case. The green dots mark the local minima on the sub-intervals. We can see that the global minimum is also contained in this set. Hence, with this procedure, we are able to find the global minimum of a function. Note, however, that the procedure is quite costly, as we have to perform five minimizations instead of only one. Consequently, it should only be used when one is sure that the problem of local optima may arise.
[Figure 3.8: The problem of local minima]

[Figure 3.9: Sub-interval division for solving the problem of local minima]
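The sub-interval strategy is easy to sketch once a one-dimensional minimizer is available. The following is my own Python illustration, not the book's code; the test function and all names are made up, and the inner routine is a plain Golden-Search as introduced earlier in this section:

```python
import math

def golden_search(f, a, b, tol=1e-9):
    """Minimize f on [a, b] via golden-section steps (two f-calls per step)."""
    a1 = (3.0 - math.sqrt(5.0)) / 2.0
    a2 = (math.sqrt(5.0) - 1.0) / 2.0
    while abs(b - a) >= tol:
        x1 = a + a1 * (b - a)
        x2 = a + a2 * (b - a)
        if f(x1) < f(x2):
            b = x2
        else:
            a = x1
    return (a + b) / 2.0

def global_search(f, a, b, n):
    """Split [a, b] into n equal sub-intervals, minimize on each,
    and keep the candidate with the smallest function value."""
    edges = [a + (b - a) * i / n for i in range(n + 1)]
    candidates = [golden_search(f, edges[i], edges[i + 1]) for i in range(n)]
    return min(candidates, key=f)

# f has a local minimum near x = 0.96 and its global minimum near x = -1.04;
# a single Golden-Search over [-2, 2] could easily lock onto the wrong one
f = lambda x: (x * x - 1.0)**2 + 0.3 * x
x_star = global_search(f, -2.0, 2.0, n=4)
# x_star lies near -1.04, the global minimum
```

As the text notes, this costs n full minimizations instead of one, so it is worthwhile only when local optima are a genuine concern.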
More informationCurves and Surfaces. Chapter 7. Curves. ACIS supports these general types of curves:
Chapter 7. Curves and Surfaces This chapter discusses the types of curves and surfaces supported in ACIS and the classes used to implement them. Curves ACIS supports these general types of curves: Analytic
More information2.2 Absolute Value Functions
. Absolute Value Functions 7. Absolute Value Functions There are a few was to describe what is meant b the absolute value of a real number. You ma have been taught that is the distance from the real number
More informationNotes on Project 1. version September 01, 2014
Notes on Project 1 version 1.44 September 01, 2014 1 Definitions Your program will keep a collection of rectangles which we refer to by C and a rectangle-quadtree which we refer to by T. The collection
More informationA Linear-Time Heuristic for Improving Network Partitions
A Linear-Time Heuristic for Improving Network Partitions ECE 556 Project Report Josh Brauer Introduction The Fiduccia-Matteyses min-cut heuristic provides an efficient solution to the problem of separating
More informationCOURSE: NUMERICAL ANALYSIS. LESSON: Methods for Solving Non-Linear Equations
COURSE: NUMERICAL ANALYSIS LESSON: Methods for Solving Non-Linear Equations Lesson Developer: RAJNI ARORA COLLEGE/DEPARTMENT: Department of Mathematics, University of Delhi Page No. 1 Contents 1. LEARNING
More informationInteger Programming Theory
Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x
More information9. p(x) = x 3 8x 2 5x p(x) = x 3 + 3x 2 33x p(x) = x x p(x) = x 3 + 5x x p(x) = x 4 50x
Section 6.3 Etrema and Models 593 6.3 Eercises In Eercises 1-8, perform each of the following tasks for the given polnomial. i. Without the aid of a calculator, use an algebraic technique to identif the
More informationLinear Programming. Readings: Read text section 11.6, and sections 1 and 2 of Tom Ferguson s notes (see course homepage).
Linear Programming Learning Goals. Introduce Linear Programming Problems. Widget Example, Graphical Solution. Basic Theory: Feasible Set, Vertices, Existence of Solutions. Equivalent formulations. Outline
More informationLinear Programming. Widget Factory Example. Linear Programming: Standard Form. Widget Factory Example: Continued.
Linear Programming Widget Factory Example Learning Goals. Introduce Linear Programming Problems. Widget Example, Graphical Solution. Basic Theory:, Vertices, Existence of Solutions. Equivalent formulations.
More information2-D Geometry for Programming Contests 1
2-D Geometry for Programming Contests 1 1 Vectors A vector is defined by a direction and a magnitude. In the case of 2-D geometry, a vector can be represented as a point A = (x, y), representing the vector
More informationOptimization. (Lectures on Numerical Analysis for Economists III) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 February 20, 2018
Optimization (Lectures on Numerical Analysis for Economists III) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 February 20, 2018 1 University of Pennsylvania 2 Boston College Optimization Optimization
More informationMath 414 Lecture 30. The greedy algorithm provides the initial transportation matrix.
Math Lecture The greedy algorithm provides the initial transportation matrix. matrix P P Demand W ª «2 ª2 «W ª «W ª «ª «ª «Supply The circled x ij s are the initial basic variables. Erase all other values
More informationConcept as a Generalization of Class and Principles of the Concept-Oriented Programming
Computer Science Journal of Moldova, vol.13, no.3(39), 2005 Concept as a Generalization of Class and Principles of the Concept-Oriented Programming Alexandr Savinov Abstract In the paper we describe a
More informationOptimization Methods: Optimization using Calculus-Stationary Points 1. Module - 2 Lecture Notes 1
Optimization Methods: Optimization using Calculus-Stationary Points 1 Module - Lecture Notes 1 Stationary points: Functions of Single and Two Variables Introduction In this session, stationary points of
More informationApplied Lagrange Duality for Constrained Optimization
Applied Lagrange Duality for Constrained Optimization Robert M. Freund February 10, 2004 c 2004 Massachusetts Institute of Technology. 1 1 Overview The Practical Importance of Duality Review of Convexity
More informationLecture notes for Topology MMA100
Lecture notes for Topology MMA100 J A S, S-11 1 Simplicial Complexes 1.1 Affine independence A collection of points v 0, v 1,..., v n in some Euclidean space R N are affinely independent if the (affine
More informationAllocating Storage for 1-Dimensional Arrays
Allocating Storage for 1-Dimensional Arrays Recall that if we know beforehand what size we want an array to be, then we allocate storage in the declaration statement, e.g., real, dimension (100 ) :: temperatures
More informationNAG Library Routine Document C05RBF.1
C05 Roots of One or More Transcendental Equations NAG Library Routine Document Note: before using this routine, please read the Users Note for your implementation to check the interpretation of bold italicised
More informationMTAEA Convexity and Quasiconvexity
School of Economics, Australian National University February 19, 2010 Convex Combinations and Convex Sets. Definition. Given any finite collection of points x 1,..., x m R n, a point z R n is said to be
More informationNAG Library Routine Document D04AAF.1
D04 Numerical Differentiation NAG Library Routine Document Note: before using this routine, please read the Users Note for your implementation to check the interpretation of bold italicised terms and other
More informationLecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize.
Cornell University, Fall 2017 CS 6820: Algorithms Lecture notes on the simplex method September 2017 1 The Simplex Method We will present an algorithm to solve linear programs of the form maximize subject
More informationNMath Analysis User s Guide
NMath Analysis User s Guide Version 2.0 CenterSpace Software Corvallis, Oregon NMATH ANALYSIS USER S GUIDE 2009 Copyright CenterSpace Software, LLC. All Rights Reserved. The correct bibliographic reference
More informationDetermination of 6D-workspaces of Gough-type parallel. manipulator and comparison between different geometries. J-P. Merlet
Determination of 6D-workspaces of Gough-type parallel manipulator and comparison between different geometries J-P. Merlet INRIA Sophia-Antipolis, France Abstract: We consider in this paper a Gough-type
More information