User's Guide to Climb High (A Numerical Optimization Routine)


Hang Qian, Iowa State University

Introduction

Welcome to Climb High, a numerical routine for unconstrained non-linear optimization problems based on the BFGS algorithm. It is especially suited to problems in applied econometrics such as maximum likelihood or simulated likelihood estimation. It also works well for solving non-linear equations and non-linear least squares.

This is a gradient-based algorithm, so local information is used to infer the global shape of the objective function. When the author wrote this program, he imagined himself as an alpinist equipped with an altimeter and a clinometer, trying to climb as high, and as efficiently, as possible. All he knows is the shape of the earth under his feet, so the essential task is to decide in which direction to proceed and when to recalibrate that direction. The landscape can be complicated: he might encounter rugged terrain and pits, rivers and other obstacles, and even cliffs. As a result, scientists cannot provide him with a complete recipe for the climb; he must learn by doing and rise from every tumble. In the same vein, no prescribed algorithm can always find the optimum, and a good computer routine implementing the algorithm must be adaptive and resilient. In writing this program, the author found that numerical optimization is more an art than a science, and the altitude reached is the touchstone of the program. Climb High blends a variety of techniques to flexibly adjust the step size in the line search for the maximum / minimum.

Climb High is written in the MATLAB language. Compared with the canned optimization routines in MATLAB, Climb High has the following features:

Less picky about starting values
More tolerant of poorly scaled problems
Stable: it seldom produces error messages or stops halfway through the optimization
Able to find an optimum at least as good as the canned routines in most cases
Reasonable in speed

Just in case -- if you find that the canned routines in MATLAB do not work satisfactorily for the optimization problem at hand, may I suggest you give Climb High a try?

Usage

The Climb High optimization package contains two major MATLAB functions, MAXIMIZE.m and MINIMIZE.m. In addition, it has a bonus function, EQUATION.m, to solve non-linear equation systems. Users can call the functions in the same way as the MATLAB canned optimization routines such as fminunc() and fminsearch(). The syntax looks like:

[x, fval, exitflag, Gradient, Hessian] = MAXIMIZE(fun, x0, options)
[x, fval, exitflag, Gradient, Hessian] = MINIMIZE(fun, x0, options)
[x, fval, exitflag, Gradient, Hessian] = EQUATION(fun, x0, options)

where fun is the user-supplied objective function, which can be a function handle, an anonymous function, or a character string specifying a function (see below). x0 is the starting value; if x0 is not specified, the program will try to generate one automatically. options is a structure containing the optimization configuration; if options is not specified, the default values will be loaded. options consists of the following fields:

MethodGradient: 0 = user-supplied analytic gradients (fun must return them in its second output argument); 1 = numerical gradients with forward difference; 2 = numerical gradients with central difference (default); 3 = numerical gradients with Richardson extrapolation of order 1; 4 = numerical gradients with Richardson extrapolation of order 2; 5 = numerical gradients with Richardson extrapolation of order 3; 6 = numerical gradients with Richardson extrapolation on the central difference
DiffDelta: the small increment used to compute numerical gradients (default = 1e-6)
Display: 1 = display iteration results on screen (default); 0 = write them into a plain text file
MaxIter: maximum number of iterations (default = 200)
TolFun: convergence criterion for both parameters and function values (default = 1e-6)
MaxAttempt: maximum number of attempts to adjust the line search window and the darts shoot (default = 15)
MaxDarts: the number of darts in each attempt of the darts shoot (default = 5)
NelderMead: whether to switch to simplex search mode if line search fails (default = 1)
Minutes: if x0 is not specified, the time (in minutes) the computer is allowed to spend searching for an initial value (default = 1, but not precise)
FinalCheck: post-convergence random check for a global maximum / minimum (default = 1)
Rounding: whether to round the results if the final parameters are very close to integers (default = 1)

You may set options by, for example, options = struct('MethodGradient',1,'Display',0); field names are case sensitive.

x is the optimized parameter vector that maximizes / minimizes the objective function.
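For instance, a complete call might look like the sketch below. The two-parameter objective function, the starting value, and the particular option settings are illustrative choices, not part of the package; the call signature follows the description above.

% Maximize a concave quadratic with forward-difference gradients.
fun = @(x) -(3*x(1)^2 + 2*x(1)*x(2) + x(2)^2);       % global maximum at (0, 0)
x0 = [1; 1];                                          % starting value
options = struct('MethodGradient', 1, 'MaxIter', 100, 'TolFun', 1e-8);
[x, fval, exitflag] = MAXIMIZE(fun, x0, options);     % omit the Hessian output to save time
fprintf('x = (%g, %g), fval = %g, exitflag = %d\n', x(1), x(2), fval, exitflag);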

fval is the optimum function value that Climb High reaches.

exitflag marks how the optimization routine terminates:
1 = function value, parameters, and gradients all converge
2 = both the function value and the parameters converge, but the gradients remain non-zero
3 = the function value cannot be improved further
4 = the maximum number of iterations is reached without convergence
0 = an error occurs somewhere

Gradient is the gradient vector evaluated at the optimum.

Hessian is the Hessian evaluated at the optimum. (It takes time to compute; do not request this output argument if you do not need the Hessian.)

The author has tried to make the program as friendly as possible. The minimum requirement is that you provide an objective function. For a complicated problem, you might store your objective function in a stand-alone MATLAB file in the current folder. For example, you can create a function called myfun():

function y = myfun(x)
y = - (3 * x(1)^2 + 2 * x(1) * x(2) + x(2)^2);

Then you can use MAXIMIZE(@myfun) to find the optimum, which analytically is trivially x(1) = x(2) = 0. If you do not provide a starting value, the program will try to generate one. However, this automatic generation is of limited ability; for complicated problems it may or may not work. Since this problem is so easy, we can also use an anonymous function:

MAXIMIZE(@(x) - (3 * x(1)^2 + 2 * x(1) * x(2) + x(2)^2))

Lastly, you can also use character strings to represent the optimization problem. For example:

MAXIMIZE('- (3 * x(1)^2 + 2 * x(1) * x(2) + x(2)^2)')
MAXIMIZE('- (3 * x1^2 + 2 * x1 * x2 + x2^2)')
MAXIMIZE('- (3 * x^2 + 2 * x * y + y^2)')

In addition, Climb High provides a graphic interface, GUI.m, to optimize or solve problems. The graphic interface is convenient when you have a small-scale problem, say finding the root of a function or maximizing an objective function with one or two variables. GUI.m is a MATLAB script; once you open it, press F5 to run it. (Alternatively, there are buttons on the toolbar and in the menu to run the script, or simply type GUI in the command window.)

Algorithm

Starting Values

For complicated problems, users are recommended to provide a decent starting value to ensure fast convergence. However, Climb High was designed with the fact in mind that most users do not want to provide starting values. This is understandable: users do not know the optimal values and are afraid of providing a bad guess at the beginning, and they look for a good starting value manually only when the optimization routine gets stuck. Climb High is less picky about starting values because its algorithm is adaptive and resilient. Even if the starting value is not specified, the program will try to generate one by brute-force search for a while. This is nothing but a random, wild guess; it is likely to work for small-scale problems, while for large-scale problems it may or may not work. During the specified search time (see the Minutes option), the program compares the candidates it finds, if any, and picks the one with the best function value. If a user-specified initial value does not return a finite function value, the program will also search for an appropriate one by brute force.

Gradients

Climb High uses the BFGS algorithm to locate the optimum, so gradients are necessary. Users are recommended to provide analytic gradients if it is feasible to do so, since analytic gradients greatly improve the quality of the iteration. However, it is understood that most users are reluctant to provide analytic gradients, so Climb High offers several varieties of numerical gradients, which differ in speed and accuracy.

The first is the forward difference. To fix ideas, suppose we are going to approximate the derivative f'(x) evaluated at some point x. By the Taylor expansion of f(x + δ), we have

f(x + δ) = f(x) + f'(x) δ + (1/2) f''(ξ) δ^2,

where ξ is between x and x + δ, so the remainder is of order O(δ^2). Rearranging, we obtain the forward difference formula:

f'(x) ≈ [f(x + δ) - f(x)] / δ.
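As an illustration (not Climb High's actual code), a forward-difference gradient for a vector-valued parameter can be computed as follows; the function name numgrad_forward and its default increment are illustrative assumptions.

function g = numgrad_forward(fun, x, delta)
% Forward-difference gradient: one extra function evaluation per dimension,
% in addition to the "sunk cost" evaluation of fun(x) itself.
if nargin < 3, delta = 1e-6; end
n  = numel(x);
g  = zeros(n, 1);
f0 = fun(x);
for i = 1:n
    xi = x;
    xi(i) = xi(i) + delta;            % perturb the i-th coordinate only
    g(i) = (fun(xi) - f0) / delta;    % error of order O(delta)
end
end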

In complicated numerical optimization, function evaluations take up most of the computation time, so the speed of the code is roughly proportional to how many times we evaluate the objective function. Whether or not gradients are computed, we need to compute f(x) anyway, so that evaluation is a sunk cost. The variable cost of computing gradients by forward difference is one additional evaluation of f for each dimension. This is the most parsimonious way to compute gradients, and its precision is also the lowest: the theoretical error shrinks only at the rate O(δ) as δ goes to zero.

However, a fundamental problem is that the computer commits truncation errors, since a double-precision floating point number keeps only 15 to 16 significant digits. To compute the gradient we have to take a difference of the form f(x + δ) - f(x). When δ is too small, f(x + δ) and f(x) agree in most of their leading digits, so many digits are lost in the subtraction; if all digits are lost, the formula returns something absurd and undermines the numerical optimization procedure. If we increase δ, we alleviate the truncation error, but a large δ makes the theoretical error more prominent, since the difference quotient differs from f'(x) by O(δ). In summary, a small δ is accurate (smaller theoretical error) but dangerous (larger truncation error), while a larger δ is less precise (larger theoretical error) but safer (smaller truncation error). Climb High's default increment for the forward difference (the DiffDelta option, default 1e-6) balances this trade-off.

The second option for numerical gradients is the central difference. Recall the Taylor expansions

f(x + δ) = f(x) + f'(x) δ + (1/2) f''(x) δ^2 + (1/6) f'''(ξ_1) δ^3,
f(x - δ) = f(x) - f'(x) δ + (1/2) f''(x) δ^2 - (1/6) f'''(ξ_2) δ^3.

Both remainders are of order O(δ^3). Taking the difference of the two expansions, the f(x) and f''(x) δ^2 terms cancel, so we obtain the central difference formula:

f'(x) ≈ [f(x + δ) - f(x - δ)] / (2δ).
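A central-difference counterpart of the earlier sketch might look like the following; the usage lines at the end illustrate the δ trade-off discussed above (the helper name and the test function are illustrative, not part of the package).

function g = numgrad_central(fun, x, delta)
% Central-difference gradient: two extra function evaluations per dimension.
n = numel(x);
g = zeros(n, 1);
for i = 1:n
    xp = x; xp(i) = xp(i) + delta;
    xm = x; xm(i) = xm(i) - delta;
    g(i) = (fun(xp) - fun(xm)) / (2*delta);   % error of order O(delta^2)
end
end

% Illustration of the delta trade-off, e.g. the derivative of sin at x = 1 (true value cos(1)):
%   abs(numgrad_central(@(z) sin(z(1)), 1, 1e-12) - cos(1))   % digits lost to truncation error
%   abs(numgrad_central(@(z) sin(z(1)), 1, 1e-6)  - cos(1))   % typically far more accurate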

Note that the central difference does not take advantage of the free item f(x); it evaluates f(x + δ) and f(x - δ) instead. As a result, the variable cost of the central difference is twice that of the forward difference, but the accuracy improves to O(δ^2) as well.

The third option for numerical gradients is Richardson extrapolation of order 1. Richardson extrapolation is a refinement of the gradients and can be based on either the forward difference or the central difference. Again we start from the Taylor expansion, and this time write the forward difference with increment δ as

D(δ) = [f(x + δ) - f(x)] / δ = f'(x) + c_1 δ + c_2 δ^2 + ...

Cutting δ by a half, we obtain

D(δ/2) = [f(x + δ/2) - f(x)] / (δ/2) = f'(x) + c_1 (δ/2) + c_2 (δ/2)^2 + ...

Since we intend to cancel the O(δ) term, we define the Richardson extrapolation of order 1 as

R_1(δ) = 2 D(δ/2) - D(δ) = [4 f(x + δ/2) - f(x + δ) - 3 f(x)] / δ.

Since D(δ) is f'(x) + O(δ) and D(δ/2) is also f'(x) + O(δ), it follows that R_1(δ) = f'(x) + O(δ^2). Richardson extrapolation of order 1 takes advantage of the freebie f(x), evaluates two more points f(x + δ/2) and f(x + δ), and the precision improves to O(δ^2).

It is interesting to compare the central difference and Richardson extrapolation of order 1. Both evaluate two more points, and both have an error of order O(δ^2). The central difference has remainder term (δ^2 / 6) f'''(x) + ..., while the Richardson extrapolation has remainder term -(δ^2 / 12) f'''(x) + ... However, that does not imply Richardson extrapolation is twice as accurate as the central difference. Recall that the central difference evaluates f(x ± δ), so its span on the horizontal axis is 2δ, while in the Richardson formula the evaluation f(x + δ/2) has span δ/2 and the evaluation f(x + δ) has span δ. The apparently smaller remainder of the Richardson formula is merely caused by its smaller span; if we fix the span in the two approaches, the central difference has the smaller remainder term. The Richardson formula does, however, have the advantage of evaluating points on only one side of x instead of both sides, which is useful when evaluating points near a corner.

In fact, the central difference and Richardson extrapolation of order 1 are essentially the same thing: both can be interpreted as fitting a quadratic approximation to f and using the slope of that quadratic at x as a substitute for f'(x). The central difference fits the quadratic through the three points x - δ, x, x + δ and evaluates its slope at x, while Richardson extrapolation of order 1 fits it through x, x + δ/2, x + δ and evaluates the slope at x.

The fourth option for numerical gradients is Richardson extrapolation of order 2. Note that

R_1(δ) = f'(x) + e_2 δ^2 + e_3 δ^3 + ...

Cutting δ by a half and taking the difference with a suitable multiplication, we obtain the Richardson extrapolation of order 2:

R_2(δ) = [4 R_1(δ/2) - R_1(δ)] / 3 = f'(x) + O(δ^3).

Richardson extrapolation of order 2 takes advantage of the freebie f(x), evaluates three more points f(x + δ/4), f(x + δ/2), f(x + δ), and the precision improves to O(δ^3).

The fifth option is Richardson extrapolation of order 3, which is another level of refinement:

R_3(δ) = [8 R_2(δ/2) - R_2(δ)] / 7 = f'(x) + O(δ^4).

Richardson extrapolation of order 3 takes advantage of the freebie f(x), evaluates four more points f(x + δ/8), f(x + δ/4), f(x + δ/2), f(x + δ), and the precision improves to O(δ^4).

The sixth option is Richardson extrapolation on the basis of the central difference. Once we take the difference of f(x + δ) and f(x - δ), all the derivatives of even order cancel, so we have

C(δ) = [f(x + δ) - f(x - δ)] / (2δ) = f'(x) + d_2 δ^2 + d_4 δ^4 + ...

So we can refine C(δ) by

R_CD(δ) = [4 C(δ/2) - C(δ)] / 3 = f'(x) + O(δ^4).

Richardson extrapolation on the basis of the central difference has no freebie; it evaluates four more points f(x ± δ/2) and f(x ± δ), and the precision improves to O(δ^4).
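A one-dimensional sketch of this sixth option might look like the following; the function name is an illustrative assumption and this is not Climb High's internal code.

function d = richardson_cd(fun, x, delta)
% Richardson extrapolation on the central difference (one-dimensional).
% C(delta) has error O(delta^2); the combination below cancels the delta^2
% term, leaving an error of order O(delta^4) at the cost of 4 evaluations.
C = @(h) (fun(x + h) - fun(x - h)) / (2*h);
d = (4*C(delta/2) - C(delta)) / 3;
end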

The costs and benefits of the six methods are summarized below ("Free item" indicates whether the method reuses the already-computed f(x); "Cost" is the number of additional function evaluations per dimension):

Method                              Precision   Free item   Cost
Forward difference                  O(δ)        1           1
Central difference                  O(δ^2)      0           2
Richardson extrapolation, order 1   O(δ^2)      1           2
Richardson extrapolation, order 2   O(δ^3)      1           3
Richardson extrapolation, order 3   O(δ^4)      1           4
Richardson extrapolation on CD      O(δ^4)      0           4

As the table suggests, you will not find cash lying on Wall Street: every improvement comes with a price, and there is no arbitrage opportunity at all. The default choice of Climb High is the central difference. Also, since Richardson extrapolation is precise in general, its increment should be larger so as to reduce the truncation error. Accordingly, if you set the DiffDelta option, the program uses the specified δ for the forward and central differences and a larger increment for the Richardson extrapolations.
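The forward-difference-based Richardson extrapolations in the table can be written as one recursive helper. The sketch below is a one-dimensional illustration of the standard Richardson recursion; it is not Climb High's internal implementation, and the function name is an assumption.

function d = richardson_fd(fun, x, delta, k)
% Order-k Richardson extrapolation built on the forward difference.
% k = 0 is the plain forward difference with error O(delta); each additional
% order halves the increment once more and cancels the leading error term,
% giving error O(delta^(k+1)). The distinct points used are x + delta/2^j,
% j = 0..k, plus the freebie fun(x) (the naive recursion re-evaluates some of them).
if k == 0
    d = (fun(x + delta) - fun(x)) / delta;
else
    d = (2^k * richardson_fd(fun, x, delta/2, k-1) - richardson_fd(fun, x, delta, k-1)) / (2^k - 1);
end
end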

Newton Type Iteration

Once the objective function and its gradients can be evaluated, the alpinist has equipped himself with an altimeter and a clinometer. The next step is to climb the mountain efficiently. Rather than scramble randomly, the alpinist had better predict the overall landscape from the shape of the earth underneath his feet; however, the path may be complicated, so he has to update his beliefs and predictions from time to time.

An obvious way to infer the global shape from local information is quadratic approximation, which has the advantage that the maximum / minimum of a quadratic function can be computed analytically. Suppose our objective function is f(x) and we are currently at a point x_0. We Taylor-expand the objective function up to the second order:

f(x) ≈ q(x) = f(x_0) + g_0' (x - x_0) + (1/2) (x - x_0)' H_0 (x - x_0),

where g_0 and H_0 are the gradient and Hessian evaluated at x_0. It is possible to write down the remainder term explicitly (with the Hessian evaluated at some intermediate point), but we are not going to analyze the errors here, so we simply treat the quadratic q(x) as an approximation to the objective function. Our goal is to find the maximum of f, so we solve the optimum of the quadratic function analytically.

FOC: g_0 + H_0 (x - x_0) = 0.

The solution is

x = x_0 - H_0^{-1} g_0.

This is the Newton-type formula. Note that x is the optimum of the quadratic approximation to f, not of f itself. Once we arrive at x, we need to re-evaluate the landscape ahead and iterate the quadratic approximation. The iteration formula looks like

x_{k+1} = x_k - H_k^{-1} g_k,

where g_k and H_k are the gradient and Hessian evaluated at x_k. If the objective function is globally concave, the iteration should converge to the maximum of f eventually.

Unfortunately, the above procedure does not work well for many empirical problems. First, virtually no one is willing to provide an analytic Hessian for their optimization problem, and, due to precision loss in finite differences, a numerical Hessian is usually of poor quality and computationally expensive. Second, it is wishful thinking that our objective function is globally concave (consider the pits, rivers, and cliffs facing the alpinist), so there is no guarantee that a dogmatic Newton-type iteration will reach the maximum. The second problem can be partially addressed by a line search over the step size, while the first problem is well addressed by the BFGS algorithm.
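For intuition, the dogmatic Newton-type iteration (with finite-difference gradient and Hessian, and none of Climb High's safeguards) can be sketched as follows, reusing the numgrad_central helper sketched earlier; all names and increments are illustrative.

function x = newton_maximize(fun, x0, maxiter, tol)
% Plain Newton-type iteration for maximization: x <- x - H \ g.
% No line search and no Hessian correction, so it is only reliable for
% well-behaved (e.g., globally concave) objective functions.
x = x0;
for k = 1:maxiter
    g = numgrad_central(fun, x, 1e-6);        % gradient (sketched earlier)
    H = numhess(fun, x, 1e-4);                % finite-difference Hessian (below)
    step = -H \ g;                            % solve H * step = -g
    if norm(step) < tol, break; end
    x = x + step;
end
end

function H = numhess(fun, x, delta)
% Symmetric finite-difference Hessian from central-difference gradients.
n = numel(x);
H = zeros(n);
for i = 1:n
    xp = x; xp(i) = xp(i) + delta;
    xm = x; xm(i) = xm(i) - delta;
    H(:, i) = (numgrad_central(fun, xp, delta) - numgrad_central(fun, xm, delta)) / (2*delta);
end
H = (H + H') / 2;                             % symmetrize
end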

BFGS Algorithm

Proposed independently by Broyden, Fletcher, Goldfarb and Shanno around 1970, BFGS is a powerful algorithm that learns the Hessian from the gradients when the analytic or numerical Hessian cannot be computed directly. During the iterations we obtain gradients, either analytically or numerically. If x were one-dimensional, then despite the precision loss we could use [g(x_{k+1}) - g(x_k)] / (x_{k+1} - x_k) to approximate the second derivative. For a multi-dimensional problem, however, it is impossible to infer the Hessian merely from g(x_{k+1}) - g(x_k), since this difference reflects the change of all variables from x_k to x_{k+1}. Nevertheless, a first-order approximation of the gradient suggests that

g(x_{k+1}) - g(x_k) ≈ H (x_{k+1} - x_k).

If we propose a method to infer the Hessian from the history of gradients {g(x_1), g(x_2), ...}, it should satisfy that equality. BFGS uses an iterative procedure to gradually learn the Hessian:

B_{k+1} = B_k + (y_k y_k') / (y_k' s_k) - (B_k s_k s_k' B_k) / (s_k' B_k s_k),

where s_k = x_{k+1} - x_k and y_k = g(x_{k+1}) - g(x_k). When the iteration settles down, B_k should approximate the Hessian well, and it is easy to verify that the updated matrix satisfies the identity above (B_{k+1} s_k = y_k). Therefore, B_k is used as a proxy for the Hessian in the Newton-type iteration.
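One BFGS update step can be sketched as follows (the standard textbook form of the update for the Hessian approximation B; the sign conventions and safeguards inside Climb High may differ).

function B = bfgs_update(B, s, y)
% BFGS update of the Hessian approximation B.
%   s = x_{k+1} - x_k   (parameter change)
%   y = g_{k+1} - g_k   (gradient change)
% The updated B satisfies the secant condition B_new * s = y.
if s' * y > 1e-12                         % skip the update if the curvature information is unusable
    B = B + (y * y') / (y' * s) - (B * (s * s') * B) / (s' * B * s);
end
end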

Step Size

The alpinist faces all kinds of difficulties in climbing, and a rigid procedure is not likely to lead him far. There is only one golden principle: try whatever path goes towards a higher altitude. The common practice is to use the adapted BFGS formula

x_{k+1} = x_k - λ B_k^{-1} g_k,

where λ is the step size, to update the parameters. We certainly want to choose λ such that the objective function is as high as possible, that is,

λ* = argmax_λ f(x_k - λ B_k^{-1} g_k).

Since the analytic solution is usually unavailable, Climb High uses a variety of ways to choose λ.

Climb High first tries an exact line search. Setting an initial window of candidate step sizes, the program compares which λ brings the highest function value. If the best candidate happens to lie on the margin of the window, the window is resized; for example, if the largest candidate is the best one among the five candidates, the program will test larger values as well. The MaxAttempt option controls the maximum number of attempts to resize the window. The trial process is also displayed on the screen unless you turn it off with the Display option.

Since an exact line search is costly, it is not appropriate to use it every time. If the exact line search successfully identifies an optimal λ, in the next iteration the program switches to the inexact line search mode. Since we expect that eventually λ = 1 will be a proper choice, this step size is tested first. To decide whether to accept it, Climb High uses the Wolfe conditions, which include the Armijo test and the strong curvature test. Write the search direction as p_k = -B_k^{-1} g_k and let φ(λ) = f(x_k + λ p_k). Then

i) Armijo test: φ(λ) ≥ φ(0) + c_1 λ φ'(0), where c_1 is a small positive number, say 1e-4;
ii) Curvature test: |φ'(λ)| ≤ c_2 |φ'(0)|, where c_2 is a positive number, say 0.9.

If the trial step passes the two tests (as is usually true in the epilogue of the iterations), it is accepted. Otherwise, Climb High searches for other candidates.

One method of inexact line search is quadratic approximation. We approximate φ(λ) with a quadratic function of λ, say

q(λ) = a λ^2 + b λ + c.

We need to pin down the coefficients first, and the least costly way uses quantities we have already computed: at the moment the trial step λ_1 fails the tests, we already have the objective function and its gradient at x_k, namely φ(0) and φ'(0) = g_k' p_k, as well as the objective function evaluated at the trial point, φ(λ_1). It follows that

c = φ(0),  b = φ'(0),  a = [φ(λ_1) - φ(0) - φ'(0) λ_1] / λ_1^2.

The quadratic function has the maximizer

λ = -b / (2a),

which is the step size we will test next.
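A sketch of the acceptance test and the quadratic fallback step, in the maximization setting above (assuming an ascent direction so that φ'(0) > 0; the constants c1, c2 and the helper name are illustrative):

function [ok, lam_next] = wolfe_or_quadratic(phi0, dphi0, lam1, phi1, dphi1, c1, c2)
% Check the Wolfe conditions for a trial step lam1 of the scalar function
% phi(lam) = f(x_k + lam * p_k); if they fail, propose the maximizer of the
% quadratic fitted to phi(0), phi'(0) and phi(lam1) as the next trial step.
armijo    = phi1 >= phi0 + c1 * lam1 * dphi0;     % sufficient increase
curvature = abs(dphi1) <= c2 * abs(dphi0);        % strong curvature condition
ok = armijo && curvature;
lam_next = lam1;
if ~ok
    a = (phi1 - phi0 - dphi0 * lam1) / lam1^2;    % quadratic coefficient (a < 0 when Armijo fails)
    b = dphi0;
    lam_next = -b / (2 * a);                      % maximizer of the fitted quadratic
end
end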

If the quadratic approximation fails to pass the tests, we can further use a cubic approximation, say

c(λ) = a λ^3 + b λ^2 + φ'(0) λ + φ(0).

The readily available quantities φ(0), φ'(0), φ(λ_1), φ(λ_2), where λ_1 and λ_2 are the two step sizes already tried, can be used to pin down the coefficients. To be concrete,

[a; b] = 1 / (λ_1^2 λ_2^2 (λ_2 - λ_1)) × [λ_1^2, -λ_2^2; -λ_1^3, λ_2^3] × [φ(λ_2) - φ(0) - φ'(0) λ_2; φ(λ_1) - φ(0) - φ'(0) λ_1].

The cubic function has the interior maximizer

λ = [-b - sqrt(b^2 - 3 a φ'(0))] / (3a).

If the cubic approximation fails, Climb High tries the cubic approximation again, now using φ(0), φ'(0) and the two most recent trial step sizes. The trial process is displayed on the screen as "Cubic I approximation" and "Cubic II approximation". If all of these inexact line search attempts fail, Climb High goes back to the exact line search mode. If for any reason even the exact line search fails to improve the objective function, Climb High switches to the Nelder-Mead simplex search mode, which is explained below.

Simplex search

Nelder-Mead simplex search is a gradient-free optimization algorithm that can actually work stand-alone to locate the optimum. Climb High uses this algorithm only if the BFGS line search fails to improve the function value, which might occur when the data are poorly scaled or when there is randomness in the objective function, as in simulated likelihood; simplex search may help to resolve those complications. The heuristic Nelder-Mead algorithm relies on the pattern of objective function values to search for a better function value. Suppose the optimization problem is to maximize f(x), where x is an n-dimensional vector. In an n-dimensional space, n + 1 points form a simplex; for example, in a 2D plane, 3 points constitute the vertices of a triangle. The Nelder-Mead algorithm starts from an initial simplex and replaces the worst vertex with a new point so as to form a better simplex.

First of all, we pick n + 1 points, evaluate their function values, and sort them from worst to best, say f(x_1) ≤ f(x_2) ≤ ... ≤ f(x_{n+1}). We are going to replace the worst vertex x_1 with a better point. To this end, we first define the centroid (center of mass) of the remaining n points,

x̄ = (x_2 + x_3 + ... + x_{n+1}) / n.

The new point is chosen somewhere along the line connecting x_1 and x̄. Let this new point be

x(t) = x̄ + t (x̄ - x_1).

t = 1 defines the reflection point, since x(1) and x_1 reflect each other with respect to the centroid. t = 2 defines the extended reflection point, since the distance [x̄, x(2)] is twice as long as that of [x̄, x(1)]. t = 1/2 defines the outward contraction point, since the distance [x̄, x(1/2)] is half of [x̄, x(1)]. t = -1/2 defines the inward contraction point, since x(-1/2) lies at the midpoint of x̄ and x_1.

Essentially, we compare these four candidates and pick the best one. However, that would mean four additional evaluations of the objective function, so the Nelder-Mead algorithm actually computes them in order: if one point is acceptable, we accept it without evaluating the rest. To be exact:

If the reflection point x_R = x(1) satisfies f(x_2) ≤ f(x_R) ≤ f(x_{n+1}), we just accept it and use it to form a new simplex. Since x_R is better than x_1, the overall quality of the simplex is improved.

If the reflection point satisfies f(x_R) > f(x_{n+1}), we wonder whether the extended reflection point x_E = x(2) can do an even better job, so we test it. If f(x_E) > f(x_R), we use x_E to form the new simplex; otherwise we still use x_R. Whichever is larger, the quality of the simplex is greatly improved, since we directly observe the function value rise.

If the reflection point satisfies f(x_1) ≤ f(x_R) < f(x_2), the reflection has traveled too far, so we cut the reflection by a half and test the outward contraction point x_OC = x(1/2). If f(x_OC) ≥ f(x_R), we accept x_OC to form the new simplex; otherwise this is a failed contraction and we have to shrink the simplex (see below).

If the reflection point is so poor that f(x_R) < f(x_1), we completely discard it and try the midpoint of x̄ and x_1, in the hope that it could be better than x_1. This is the inward contraction point x_IC = x(-1/2). If f(x_IC) > f(x_1), we accept x_IC to form the new simplex; otherwise this is a failed contraction and we have to shrink the simplex (see below).

In the unfortunate case when we have to shrink the simplex, we cut the length of each side of the simplex by a half towards the best vertex x_{n+1}, that is,

x_i ← (x_i + x_{n+1}) / 2,  i = 1, ..., n.

Once we have a new simplex, we repeat the above process, and hopefully the simplex converges eventually. Since we always keep the best vertex of the simplex, the simplex iteration never deteriorates.
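One iteration of this scheme, for maximization with the vertices stored as columns and already sorted from worst to best, might be sketched as follows; this is an illustration of the logic above, not Climb High's internal code.

function X = nelder_mead_step(fun, X)
% X: n-by-(n+1) matrix of vertices, sorted so that column 1 is the worst and
% column n+1 is the best. The caller re-sorts the columns before the next step.
n  = size(X, 1);
fX = zeros(1, n+1);
for j = 1:n+1, fX(j) = fun(X(:, j)); end
xbar  = mean(X(:, 2:end), 2);                 % centroid of all but the worst vertex
newpt = @(t) xbar + t * (xbar - X(:, 1));     % points along the line through x1 and the centroid
xR = newpt(1);  fR = fun(xR);                 % reflection point
if fR >= fX(2) && fR <= fX(end)
    X(:, 1) = xR;                             % plain reflection accepted
elseif fR > fX(end)
    xE = newpt(2);                            % try the extended reflection
    if fun(xE) > fR, X(:, 1) = xE; else, X(:, 1) = xR; end
elseif fR >= fX(1)
    xOC = newpt(1/2);                         % outward contraction
    if fun(xOC) >= fR, X(:, 1) = xOC; else, X = shrink(X); end
else
    xIC = newpt(-1/2);                        % inward contraction
    if fun(xIC) > fX(1), X(:, 1) = xIC; else, X = shrink(X); end
end
end

function X = shrink(X)
% Shrink every vertex halfway towards the best vertex (the last column);
% the best vertex itself is unchanged.
xbest = X(:, end);
X = (X + repmat(xbest, 1, size(X, 2))) / 2;
end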

Darts shoot

If the line search fails and the simplex search also fails to improve the function value, the last resort is the darts-throwing mode. The program randomly tries points nearby, less nearby, and relatively remote from the current point; if one of them happens to increase the objective function, that point is used in the next iteration. The MaxDarts option controls the number of darts in each trial, and MaxAttempt also controls the maximum number of rounds of the darts-shooting game. Of course, the result of darts shooting depends on your luck. If all these attempts fail, Climb High is unable to climb higher and has to exit regretfully; in this case the exitflag is 3.

Convergence

Hopefully, after N iterations we will welcome convergence, which means both the parameters and the function values level off. The MaxIter and TolFun options control the maximum number of iterations and the convergence criterion, respectively. For an unconstrained problem, convergence should also imply that the gradients become zero. If everything converges, the exitflag is 1. However, in the presence of rivers, cliffs and other obstacles on the mountain, the gradients can remain non-zero; in that case Climb High reports that both the function value and the parameters converge but the gradients remain non-zero, with exitflag equal to 2.

Lastly, if the FinalCheck option is on, Climb High randomly checks points nearby, less nearby, and relatively remote from the converged point to ensure that it is the global maximum. If there is no negative evidence, Climb High exits with success. Otherwise, the converged point is likely to be only a local maximum, and the program continues to search until new convergence or the maximum number of iterations, whichever comes first.

Right before Climb High reports its final results to the commander, it checks the optimized parameters. If a parameter is close enough to an integer (discrepancy smaller than TolFun), the result is rounded. You can turn off this feature by resetting the Rounding option.

Climb High reports on request the optimized parameters, the optimum function value, the type of exit, and the gradients evaluated at the optimum, at no additional cost. Also, if the Hessian evaluated at the optimum is needed, Climb High computes it by finite differencing in the same way as it computes gradients. Please note, however, that for large-scale problems with many parameters, computing the Hessian takes time.
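The darts idea can be sketched as a simple random probe at several scales; the radii and names below are illustrative assumptions, not the values used inside Climb High.

function [x, fx, improved] = darts_shoot(fun, x, fx, ndarts)
% Randomly probe points near, less near, and relatively remote from x;
% keep any probe that improves on the current function value fx.
scales = [0.01, 0.1, 1, 10];                 % illustrative "nearby ... remote" radii
improved = false;
for s = scales
    for d = 1:ndarts
        trial = x + s * randn(size(x));      % random dart at this scale
        ft = fun(trial);
        if isfinite(ft) && ft > fx           % maximization: accept only an increase
            x = trial; fx = ft; improved = true;
        end
    end
end
end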

Miscellaneous

As a by-product of the optimization routine, Climb High can also be used to solve non-linear equations. The bonus function EQUATION.m recasts a system of non-linear equations as a non-linear least squares problem and minimizes its function value. EQUATION.m can be called in the same way as MATLAB's lsqnonlin.m. The user-supplied objective function should be vector-valued, representing the value of each equation evaluated at the current parameter values. If the user also provides an analytic Jacobian, the Jacobian matrix (each row for an equation, each column for a parameter) should be returned in the second output argument of the user-supplied function.

Admittedly, non-linear equation systems might be better solved by Newton's method directly. Even when they are rewritten as non-linear least squares, algorithms like Levenberg-Marquardt would be more appropriate, since the Hessian can then be well approximated by a matrix product of Jacobians. Nevertheless, there is nothing wrong with solving non-linear equations as if they were an optimization problem, and it actually works well with Climb High. During the iterations, users can watch the sum of squared function values change; if it shrinks towards zero and eventually becomes negligible, the equation system has been solved successfully.

In the output arguments of EQUATION.m, x stands for the optimized parameters, and fval is a vector of the residual of each equation. Note that sometimes non-linear equations do not have solutions; in that case the function values will not drop to zero and the optimized parameters are not solutions. Also, if the equations have more than one theoretical solution, Climb High can report at most one, unless multiple starting values are tried in the hope of converging to other solutions.

Contact information

Written by Hang Qian, Iowa State University
Contact me: matlabist@gmail.com
More econometrics routines and pedagogical economics software are available at
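As an illustration of this recasting, the following sketch solves a small two-equation system with EQUATION; the system itself is an arbitrary example, and the call signature follows the Usage section.

% Solve the system  x1^2 + x2^2 = 5,  x1 * x2 = 2.
eqs = @(x) [x(1)^2 + x(2)^2 - 5;      % each element is one equation's residual
            x(1) * x(2) - 2];
x0  = [1; 1];
[x, fval] = EQUATION(eqs, x0);        % fval should be near zero at a solution
% Internally this amounts to minimizing the sum of squares eqs(x)' * eqs(x).
% One solution of this system is x = (1, 2); another is x = (2, 1).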
