Xpress NonLinear Manual. FICO™ Xpress Optimization Suite. Release version. Last update 30 October,
Make every decision count™
This document is the confidential, proprietary, and unpublished property of Fair Isaac Corporation. Receipt or possession of it does not convey rights to divulge, reproduce, use, or allow others to use it without the specific written authorization of Fair Isaac Corporation, and use must conform strictly to the license agreement. The information in this document is subject to change without notice. If you find any problems in this documentation, please report them to us in writing. Neither Fair Isaac Corporation nor its affiliates warrant that this documentation is error-free, nor are there any other warranties with respect to the documentation except as may be provided in the license agreement.

© Fair Isaac Corporation. All rights reserved. Permission to use this software and its documentation is governed by the software license agreement between the licensee and Fair Isaac Corporation (or its affiliate). Portions of the program copyright various authors and are licensed under certain open source licenses identified in the software itself in the <XPRESSDIR>/licenses/readme.txt file.

In no event shall Fair Isaac Corporation or its affiliates be liable to any person for direct, indirect, special, incidental, or consequential damages, including lost profits, arising out of the use of this software and its documentation, even if Fair Isaac Corporation or its affiliates have been advised of the possibility of such damage. The rights and allocation of risk between the licensee and Fair Isaac Corporation (or its affiliate) are governed by the respective identified licenses in the <XPRESSDIR>/licenses/readme.txt file. Fair Isaac Corporation and its affiliates specifically disclaim any warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The software and accompanying documentation, if any, provided hereunder is provided solely to users licensed under the Fair Isaac Software License Agreement.
Fair Isaac Corporation and its affiliates have no obligation to provide maintenance, support, updates, enhancements, or modifications except to users licensed under the Fair Isaac Software License Agreement.

FICO, Fair Isaac and Blaze Advisor are trademarks or registered trademarks of Fair Isaac Corporation in the United States and may be trademarks or registered trademarks of Fair Isaac Corporation in other countries. Other product and company names herein may be trademarks of their respective owners.

How to Contact the Xpress Team

Information, Sales and Licensing
    USA, Canada and all Americas: XpressSalesUS@fico.com
    Worldwide: XpressSalesUK@fico.com
    Tel: Fax:
    Xpress Optimization, FICO, FICO House, International Square, Starley Way, Birmingham B37 7GN, UK

Product Support
    Support@fico.com (please include "Xpress" in the subject line)
    North America: Tel (toll free): +1 (877) 4FI-SUPP, Fax: +1 (402)
    Europe, Middle East, Africa: Tel: +44 (0), Fax: +44 (0), UK (toll free): South Africa (toll free):
    Asia-Pacific, Latin America, Caribbean: Tel: +1 (415), Brazil (toll free):

For the latest news and Xpress software and documentation updates, please visit the Xpress website at or subscribe to our mailing list.
Contents

I Basic Usage 1

1 Introduction
    1.1 Mathematical programs
        Linear programs
        Convex quadratic programs
        Convex quadratically constrained quadratic programs
        General nonlinear optimization problems
        Mixed integer programs
    1.2 Technology Overview
        The Simplex Method
        The Logarithmic Barrier Method
        Successive Linear Programming
        Second Order Methods
        Mixed Integer Solvers
    1.3 A Unified Framework

2 Modelling
    2.1 Getting Started
        Initialization
        The Problem Container
        Adding Variables
        Adding Constraints
    2.2 Building up functions
        Constant Terms
        Linear Terms
        Quadratic Terms
        General Expressions
    2.3 Solving a model
        Generic Controls
        Solver Controls
        Optimizing
        Initial Values
        Getting the solution

3 Messages
    3.1 Console Output
    3.2 Logging
    3.3 Message Callbacks

II Advanced Topics 15

4 Understanding solutions
    4.1 Local and global optimality
    4.2 Convexity
    Converged and practical solutions
    The duals of general nonlinear programs

5 Some practical considerations
    The initial point
    Derivatives
        Finite Differences
        Symbolic Differentiation
        Automatic Differentiation
    Points of inflection
    Trust regions

6 Convergence
    Convergence in KNITRO
    Convergence in Xpress-SLP
        Strict Convergence
        Extended Convergence
        Stopping Criterion
        Step Bounding

7 Infeasibility
    Infeasibility Analysis in the Xpress Optimizer
    Managing Infeasibility with Xpress KNITRO
    Managing Infeasibility with Xpress-SLP
    Penalty Infeasibility Breakers in XSLP

III Reference 29

8 Library Functions 30
    XNLPaddconstantterms, XNLPaddlinearterms, XNLPaddquadraticterms, XNLPaddrhsterms,
    XNLPaddtokenizedterms, XNLPappendlogfile, XNLPclearmessagecallbacks, XNLPclearrows,
    XNLPcloselogfile, XNLPcloselogfiles, XNLPcreatecolumns, XNLPcreateprob, XNLPcreaterows,
    XNLPderegistermessagecallback, XNLPdescriberow, XNLPdestroyprob, XNLPdisableconsoleoutput,
    XNLPenableconsoleoutput, XNLPfree, XNLPgetdoubleattribute, XNLPgetdoublecontrol, XNLPgetdual,
    XNLPgetintattribute, XNLPgetintcontrol, XNLPgetlasterror, XNLPgetoptimizationsense,
    XNLPgetreducedcost, XNLPgetslack, XNLPgetsolution, XNLPgetsolverdoubleattribute,
    XNLPgetsolverdoublecontrol, XNLPgetsolverintattribute, XNLPgetsolverintcontrol, XNLPinit,
    XNLPopenlogfile, XNLPoptimize, XNLPregistermessagecallback, XNLPsetbounds,
    XNLPsetcolumnnames, XNLPsetcolumntypes, XNLPsetdoublecontrol, XNLPsetinitialvalues,
    XNLPsetintcontrol, XNLPsetoptimizationsense, XNLPsetrownames, XNLPsetrowtype,
    XNLPsetsolverdoublecontrol, XNLPsetsolverintcontrol, XNLPtokenizeexpression, XNLPwriteprob

9 Problem Attributes 83
    XNLP_COLUMNS, XNLP_ROWS, XNLP_OBJECTIVE_VALUE, XNLP_SOLUTION_STATUS, XNLP_SOLVER_SELECTED

10 Optimization Controls 86
    XNLP_FEASIBILITY_TOLERANCE, XNLP_OPTIMALITY_TOLERANCE, XNLP_INTEGRALITY_TOLERANCE, XNLP_SOLVER

11 Expression Formats
    String Expressions
    Tokenized Expressions

IV Examples

12 mmxnlp
    The effects of convexity
    The cost of numerical derivatives
    The effects of local optimality
    Problems without duals
    Non-connected feasible regions
    Penalty feasibility breakers
    Penalty multipliers
    The effect of numerical derivatives
    Solution types
    12.10 Unbounded first order approximations
    The role of placeholder derivatives

Index 109
I. Basic Usage
CHAPTER 1
Introduction

An accurate model of something from the real world doesn't often turn out to need only a weighted sum of a set of variables. It might sometimes be possible to approximate it that way, but in many cases the approximation won't be very accurate. As a model changes, the best technology to use to solve it will change too, and there are many types of solver for specialised classes of problem, traditionally taking model descriptions in very different formats.

FICO Xpress NonLinear is designed to help you solve your problems as they really are. It provides a single interface to the solvers of the FICO Xpress Optimization Suite, giving you access through one API to solvers of the highest quality for every class of problem. Not only that, but it can choose for you the most efficient means of solving your problem, freeing you to concentrate on making the best possible model.

1.1 Mathematical programs

There are many specialised forms of model in mathematical programming, and if such a form can be identified, there are usually much more efficient solution techniques available. This section describes some of the major types of problem that Xpress-NonLinear can identify automatically.

1.1.1 Linear programs

Linear programming (LP) involves solving problems of the form

    minimize    c^T x
    subject to  Ax ≤ b

and in practice this encompasses, via transformations, any problem whose objective and constraints are linear functions. Such problems were traditionally solved with the simplex method, although recently interior point methods have come to be favoured for larger instances. Linear programs can be solved quickly, and solution techniques scale to enormous sizes of the matrix A. However, few applications are genuinely linear.
It was common in the past, however, to approximate general functions by linear counterparts when LPs were the only class of problem with efficient solution techniques.

1.1.2 Convex quadratic programs

Convex quadratic programming (QP) involves solving problems of the form

    minimize    c^T x + x^T Q x
    subject to  Ax ≤ b
for which the matrix Q is symmetric and positive semi-definite (that is, x^T Q x ≥ 0 for all x). This encompasses, via transformations, all problems with a positive semi-definite Q and linear constraints. Such problems can be solved efficiently by interior point methods, and also by quadratic variants of the simplex method.

1.1.3 Convex quadratically constrained quadratic programs

Convex quadratically constrained quadratic programming (QCQP) involves solving problems of the form

    minimize    c^T x + x^T Q x
    subject to  Ax ≤ b
                q_j^T x + x^T P_j x ≤ d_j, for all j

for which the matrix Q and all matrices P_j are positive semi-definite. The most efficient solution techniques are based on interior point methods.

1.1.4 General nonlinear optimization problems

Nonlinear programming (NLP) involves solving problems of the form

    minimize    f(x)
    subject to  g_j(x) ≤ b_j, for all j

where f(x) is an arbitrary function and the g_j(x) are a set of arbitrary functions. This is the most general type of problem, and any constrained model can be realised in this form via simple transformations. Until recently, few practical techniques existed for tackling such problems, but it is now possible to solve even large instances using Successive Linear Programming (SLP) solvers or second-order methods.

1.1.5 Mixed integer programs

Mixed-integer programming (MIP), in the most general case, involves solving problems of the form

    minimize    f(x)
    subject to  g_j(x) ≤ b_j, for all j
                x_k integral, for selected k

It can be combined with any of the previous problem types, giving Mixed-Integer Linear Programming (MILP), Mixed-Integer Quadratic Programming (MIQP), Mixed-Integer Quadratically Constrained Quadratic Programming (MIQCQP), and Mixed-Integer Nonlinear Programming (MINLP). Efficient solution techniques now exist for all of these classes of problem.

1.2 Technology Overview

In real-world applications, it is vital to match the right optimization technology to your problem.
The FICO Xpress libraries provide dedicated, high performance implementations of optimization technologies for the many model classes commonly appearing in practical applications. This includes solvers for linear programming (LP), mixed integer programming (MIP), convex quadratic programming (QP), convex quadratically constrained programming (QCQP), and general nonlinear programming (NLP).
1.2.1 The Simplex Method

The simplex method is one of the most well-developed and highly studied mathematical programming tools. The solvers in the FICO Xpress Optimizer are the product of over 30 years of research, and include high quality, competitive implementations of the primal and dual simplex methods for both linear and quadratic programs. A key advantage of the simplex method is that it can very quickly reoptimize a problem after it has been modified, which is an important step in solving mixed integer programs.

1.2.2 The Logarithmic Barrier Method

The interior point method of the FICO Xpress Optimizer is a state of the art implementation, with leading performance across a variety of large models. It is capable of solving not only the largest and most difficult linear and convex quadratic programs, but also convex quadratically constrained quadratic programs. It includes optimized versions of both infeasible logarithmic barrier methods and homogeneous self-dual methods.

1.2.3 Successive Linear Programming

For general nonlinear programs which are very large, highly structured, or contain a significant linear part, the FICO Xpress Sequential Linear Programming solver (XSLP) offers exceptional performance. Successive linear programming is a first order, iterative approach for solving nonlinear models. At each iteration, a linear approximation to the original problem is solved at the current point, and the distance of the result from the selected point is examined. When the two points are sufficiently close, the solution is said to have converged and the result is returned. This technique is thus based upon solving a sequence of linear programming problems, and benefits from the advanced algorithmic and presolving techniques available for linear problems. This makes XSLP scalable, as well as efficient for large problems.
In addition, the relatively simple core concepts make understanding the solution process and subsequent tuning comparatively straightforward.

1.2.4 Second Order Methods

Also integrated into the Xpress suite is KNITRO from Ziena Optimization, a second-order method which is particularly suited to large-scale continuous problems containing high levels of nonlinearity. Second order methods approximate a problem by examining quadratic programs fitted to a local region. This can provide information about the curvature of the solution space to the solver, which first-order methods do not have. Advanced implementations of such methods, like KNITRO, may as a result be able to produce more resilient solutions. This can be especially noticeable when the initial point is close to a local optimum.

1.2.5 Mixed Integer Solvers

The FICO Xpress MIP Solver is one of the leading commercial codes for all classes of mixed integer program. Mixed integer programming forms the basis of many important applications, and the implementation in the FICO Xpress Suite has proven itself in operation for some of the world's largest organizations. Both XSLP and KNITRO are also able to solve mixed integer nonlinear problems (MINLP).

1.3 A Unified Framework

Xpress-NonLinear makes it easy to get the best out of all these technologies, by bringing them together behind a single API. It can automatically select the most appropriate technology for
your problem as it is currently posed. This gives you two big advantages. Firstly, if you find your linear or quadratic approximation is not strong enough, it makes it easy to add nonlinear terms to your model, without rewriting it entirely. The API for solving the problem remains the same, regardless of any specialised class it might belong to. Secondly, Xpress-NonLinear can apply the most efficient solver to your problem, regardless of the way it is fed into the run-time. If a particular instance of your problem turns out to be linear, you will benefit from the greatly increased efficiency that a dedicated linear solver can bring to your model. Xpress-NonLinear takes away from you the burden of analysing your current problem, deciding on appropriate technology to solve it with, and translating your model into a solver-specific format.
CHAPTER 2
Modelling

This chapter gives an overview of how to build and solve models in Xpress NonLinear using the C API. Detailed descriptions of each function can be found in the reference part of the manual.

2.1 Getting Started

To begin creating a model with Xpress-NonLinear, there are a few essential steps which must be followed.

2.1.1 Initialization

Before anything can be done with the XNLP API, it must be initialized using XNLPinit. This will claim any resources the run-time needs. The corresponding call to release these resources again is XNLPfree.

    #include <stdlib.h>
    #include <xnlp.h>

    int main(int argc, char **argv)
    {
        /* Declarations go here */
        if (XNLPinit(NULL) != XNLP_OK) {
            /* Failed to initialize XNLP */
            exit(EXIT_FAILURE);
        }
        /* Body of model goes here */
        XNLPfree();
        exit(EXIT_SUCCESS);
    }

2.1.2 The Problem Container

An XNLPprob represents a model which is being built up and solved. The next step after initializing the run-time is to create a new problem to work on, using XNLPcreateprob. This takes the direction of optimization that the problem will have (XNLP_MAXIMIZE or XNLP_MINIMIZE). The direction of optimization can be set and queried later using XNLPsetoptimizationsense and XNLPgetoptimizationsense. Once the problem is no longer needed, it should be freed using XNLPdestroyprob.

    XNLPprob prob;
    /* Initializations skipped */
    if (XNLPcreateprob(&prob, XNLP_MAXIMIZE) != XNLP_OK) {
        /* Couldn't create the problem */
        XNLPfree();
        exit(EXIT_FAILURE);
    }
    /* Use problem here */
    XNLPdestroyprob(&prob);
    /* Remainder of clean-up skipped */

2.1.3 Adding Variables

The problem when it is created is blank, having no constraints or variables, and its initial formulation is equivalent to "maximize 0" (or "minimize 0", if XNLP_MINIMIZE was passed to XNLPcreateprob instead). Each variable in the problem requires a column, which will store information about any expressions that variable appears in, together with its type and bounds. To add variables to the problem, call XNLPcreatecolumns. The default type of a variable is continuous (type 'C'), but other possibilities are binary (type 'B') and integer (type 'I'). The default bounds are x ≥ 0, which is traditional for mathematical programs. For example, to create five new variables, initialized for simplicity here from static arrays, you might write

    const char *col_names[5] = { "x_1", "x_2", "x_3", "x_4", "x_5" };
    const char col_types[5] = { 'C', 'C', 'I', 'C', 'B' };
    const double lb[5] = { 0, XNLP_MINUS_INFINITY, 2, 0, 0 };
    const double ub[5] = { XNLP_PLUS_INFINITY, 0, 4, 3.5, 1 };
    if (XNLPcreatecolumns(prob, 5, col_names, col_types, lb, ub) != XNLP_OK) {
        /* An error occurred while creating the columns - see text */
    }

If an error occurs whilst creating new columns, the problem will be left in an indeterminate state. This means that it is no longer safe to continue using the problem, as its contents are not well-defined. To help you avoid errors resulting from such issues, XNLP will report an error for any subsequent operation on the problem except for XNLPdestroyprob (which is always allowed) and XNLPgetlasterror.

2.1.4 Adding Constraints

In a similar fashion to the variables, each constraint of the problem requires a row to store its type and the constraint function g(x). All constraints in XNLP have one of the types:

    g(x) ≤ 0  (type L)
    g(x) ≥ 0  (type G)
    g(x) = 0  (type E)
    g(x) free (type N)

which means that, unlike traditional linear programming, the right-hand side is always zero; a conventional constraint such as 2x_1 + 3x_2 ≤ 6 would therefore be entered as g(x) = 2x_1 + 3x_2 - 6 of type L.
To create three new rows, again initialized from static arrays to simplify the example, you might write

    const char *row_names[3] = { "r_1", "r_2", "r_3" };
    const char row_types[3] = { 'E', 'G', 'L' };
    if (XNLPcreaterows(prob, 3, row_names, row_types) != XNLP_OK) {
        /* An error occurred while creating the rows - see text */
    }
As for columns, a failure to create rows leaves the problem in an indeterminate state. You should retrieve any error message and error code, and then destroy the problem. It cannot be used for any further operations, as its internal state may no longer be valid.

Rows and columns are referred to by index, and the indices begin at zero. After calling XNLPcreatecolumns and XNLPcreaterows, the highest numbered variables or constraints are the newly created ones; the remainder have stayed in their original order. Notice that, at present, it isn't possible to delete rows or columns from the model; instead, variables should be fixed (by setting both their bounds equal to zero using XNLPsetbounds) and constraint functions should be zeroed (by clearing the row using XNLPclearrows). The objective function is the only row which is not a constraint. It has the special index XNLP_OBJECTIVE_ROW.

2.2 Building up functions

After creating a problem, and creating some rows and columns in its formulation, we have a problem statement along the lines of

    maximize    0
    subject to  r_1 : 0 = 0
                r_2 : 0 ≥ 0
                r_3 : 0 ≤ 0
                x_1 ≥ 0, x_2 ≤ 0, 2 ≤ x_3 ≤ 4, 0 ≤ x_4 ≤ 3.5, 0 ≤ x_5 ≤ 1
                x_3 integer, x_5 binary

2.2.1 Constant Terms

Now we need to fill out the constraint functions g(x) and objective function f(x). The XNLP API is based around adding terms to the functions f(x) and g(x). These start off as zero functions, but can be built up by successively adding terms of various types. Perhaps the simplest type of term is a constant term, added with XNLPaddconstantterms. Like all the functions which add terms, it can work on multiple rows at once, and if a null array of indices is passed, the first count rows are modified.
For example,

    const double constants[3] = { 2, 1, 3 };
    if (XNLPaddconstantterms(prob, 3, NULL, constants) != XNLP_OK) {
        /* This error is unrecoverable */
    }

After this call, the rows of the problem have become

    r_1 : 2 = 0
    r_2 : 1 ≥ 0
    r_3 : 3 ≤ 0

Note that the constant terms are added to the constraint functions on the left-hand side of the row. XNLP provides a convenience function XNLPaddrhsterms, which is identical to XNLPaddconstantterms but subtracts the given constants from the left-hand side instead, which is of course mathematically equivalent to adding them to the right-hand side. But remember that in the XNLP formulation, the right-hand side of every constraint is zero. The objective function is built up in the same way as constraint functions, by passing the special row index XNLP_OBJECTIVE_ROW to the functions which add terms.
2.2.2 Linear Terms

Many models contain a large linear part, and XNLP allows these terms to be added efficiently using XNLPaddlinearterms. As for XNLPaddconstantterms, if the rows to modify are not specified, the first count rows will be changed. The linear terms are specified in standard, row-wise sparse matrix format, where the offsets give the start of the terms for each row in the matrix arrays. The final offset is the length of the matrix arrays. Suppose we wish to add linear terms to the first and third rows. We might write

    const int rows[2] = { 0, 2 };
    const int offset[3] = { 0, 2, 4 };
    const int cols[4] = { 3, 4, 0, 2 };
    const double coefs[4] = { 1, 5, 2, -1 };
    if (XNLPaddlinearterms(prob, 2, rows, offset, cols, coefs) != XNLP_OK) {
        /* This error is unrecoverable */
    }

After this call, the rows of the problem have become

    r_1 : x_4 + 5x_5 + 2 = 0
    r_2 : 1 ≥ 0
    r_3 : 2x_1 - x_3 + 3 ≤ 0

2.2.3 Quadratic Terms

XNLP also provides an efficient interface for adding quadratic terms to a formulation, through the interface function XNLPaddquadraticterms. Quadratic terms are most naturally expressed as symmetric matrices, and this function takes the lower triangle of such a matrix in triplet form. To now add quadratic terms to the objective and second row, we might write

    const int rows[2] = { 1, XNLP_OBJECTIVE_ROW };
    const int offset[3] = { 0, 2, 5 };
    const int col1[5] = { 3, 4, 0, 2, 3 };
    const int col2[5] = { 3, 1, 0, 2, 0 };
    const double coefs[5] = { 1, 5, 2, 3, -1 };
    if (XNLPaddquadraticterms(prob, 2, rows, offset, col1, col2, coefs) != XNLP_OK) {
        /* This error is unrecoverable */
    }

After this call, the rows of the problem have become

    r_1 : x_4 + 5x_5 + 2 = 0
    r_2 : x_4^2 + 5x_2 x_5 + 1 ≥ 0
    r_3 : 2x_1 - x_3 + 3 ≤ 0

and the objective is

    maximize 2x_1^2 + 3x_3^2 - x_1 x_4

2.2.4 General Expressions

Quadratic and linear terms alone are not sufficient to describe a general nonlinear problem. For that, a more powerful interface is required, and this is provided by XNLPaddtokenizedterms.
Any type of term, including quadratic, linear and constant terms, may be added using this function, but also general expressions of arbitrary complexity. This function takes its expressions as token lists, which are pairs of numbers encoding the expression. The easiest way to get a token list is to parse it from a string expression using XNLPtokenizeexpression. For example, suppose we want to add sin(x_1 + x_2) to the first row. We might write
    const int rows[1] = { 0 };
    const char *expr[1] = { "sin(x_1+x_2)" };
    int offset[2];
    int *token_type;
    double *token_value;

    offset[0] = 0;
    if (XNLPtokenizeexpression(prob, expr[0], 0, &offset[1], NULL, NULL) != XNLP_OK) {
        /* Recoverable error; continue if possible */
    }
    token_type = malloc(offset[1] * sizeof(*token_type));
    token_value = malloc(offset[1] * sizeof(*token_value));
    /* Memory allocation failure check skipped for brevity */
    if (XNLPtokenizeexpression(prob, expr[0], offset[1], NULL, token_type, token_value) != XNLP_OK) {
        /* Recoverable error; continue if possible */
    }
    if (XNLPaddtokenizedterms(prob, XNLP_TOKEN_FORMAT_POSTFIX, 1, rows, offset,
                              token_type, token_value) != XNLP_OK) {
        /* Unrecoverable error */
    }
    free(token_type);
    free(token_value);

After this call, the rows of the problem have become

    r_1 : sin(x_1 + x_2) + x_4 + 5x_5 + 2 = 0
    r_2 : x_4^2 + 5x_2 x_5 + 1 ≥ 0
    r_3 : 2x_1 - x_3 + 3 ≤ 0

It is possible to work with tokens directly, and the token formats are described in the reference section of the manual. XNLPtokenizeexpression will always return tokens in postfix format, but you may find it more convenient when working with tokens directly to use infix format. Note that terms can be added to the constraint and objective functions in any order; the order is not significant. Should it be necessary to remove terms from a function, the best option is to call XNLPclearrows to restore the function of the row or rows in question to zero.

2.3 Solving a model

When the model is built, the next step is to try to find an optimal solution.

2.3.1 Generic Controls

There are a number of control parameters which can affect the behaviour of the solvers underlying XNLP. The most commonly used are:

    XNLP_FEASIBILITY_TOLERANCE   Specifies the margin of feasibility on the constraints
    XNLP_OPTIMALITY_TOLERANCE    Specifies the margin of optimality on the objective function

These can be set using XNLPsetdoublecontrol.
For example, to set the feasibility tolerance to 10^-3 we might write

    if (XNLPsetdoublecontrol(prob, XNLP_FEASIBILITY_TOLERANCE, 1e-3) != XNLP_OK) {
        /* Unrecoverable error */
    }
2.3.2 Solver Controls

There is a distinction in XNLP between solver controls and generic controls. The above are generic controls, which are valid for all types of problem and solution engine. The solver controls are a way to specify parameters directly for individual solvers. For example, if we wished to set the absolute delta convergence tolerance for Xpress-SLP to 0.1, we might write

    if (XNLPsetsolverdoublecontrol(prob, XSLP_ATOL_A, 0.1) != XNLP_OK) {
        /* Unrecoverable error */
    }

The parameters available for individual solvers are discussed in their respective reference manuals. Xpress-NonLinear will automatically choose the best solver for your problem, but to influence this process, the generic control XNLP_SOLVER can be set. For example, to suggest to the run-time that Xpress-SLP is the best solver for your problem, you might write

    if (XNLPsetintcontrol(prob, XNLP_SOLVER, XNLP_SOLVER_XSLP) != XNLP_OK) {
        /* Unrecoverable error */
    }

2.3.3 Optimizing

To begin the optimization, you must call XNLPoptimize on your problem, for example

    int status;
    if (XNLPoptimize(prob, &status) != XNLP_OK) {
        /* Unrecoverable error */
    }
    switch (status) {
    case XNLP_OPTIMAL:
    case XNLP_LOCALLY_OPTIMAL:
        /* Optimization was successful */
        break;
    case XNLP_INFEASIBLE:
    case XNLP_UNBOUNDED:
        /* The problem is ill-posed */
        break;
    case XNLP_LOCALLY_INFEASIBLE:
        /* The solver couldn't find a feasible point - see text */
        break;
    default:
        /* The solver got stuck trying to solve the problem */
        break;
    }

The status return code describes the type of solution that the solver was able to obtain. Depending upon the techniques employed, nonlinear problems can have more complex solution types than a traditional linear model, and this is reflected in the status. For non-convex problems, it must be expected that the returned solutions will be locally optimal, and not necessarily globally optimal.
Similarly, it is possible for a problem to be locally infeasible, meaning that the solver was unable to find a way to move to a more feasible point, without the entire problem being infeasible.

2.3.4 Initial Values

Changing the initial values of the variables (using XNLPsetinitialvalues) may give different solutions, depending on the region in which the solver begins. For example, to specify that x_1 and x_3 must begin at one, you might write:
    const int cols[2] = { 0, 2 };
    const double iv[2] = { 1, 1 };
    if (XNLPsetinitialvalues(prob, 2, cols, iv) != XNLP_OK) {
        /* Unrecoverable error */
    }

2.3.5 Getting the solution

If the optimization succeeded, the solution values at the optimal point can be retrieved using XNLPgetsolution. As for many other API functions, the indices to this function are optional, and the first count variables will be returned if they are NULL. For example, to retrieve the values of all variables from our problem, we might write:

    double solution[5];
    if (XNLPgetsolution(prob, 5, NULL, solution) != XNLP_OK) {
        /* Recoverable error - carry on if possible */
    }

Similarly, the slack in the constraints (the amount of change allowed in each g(x) before the constraint is violated) can be retrieved using XNLPgetslack. To retrieve the slack for r_2 we might write

    double slack_r2;
    const int index_r2 = 1;
    if (XNLPgetslack(prob, 1, &index_r2, &slack_r2) != XNLP_OK) {
        /* Recoverable error - carry on if possible */
    }

The objective value can be retrieved from the XNLP_OBJECTIVE_VALUE generic attribute. For example, we might write:

    double optimal_objective;
    if (XNLPgetdoubleattribute(prob, XNLP_OBJECTIVE_VALUE, &optimal_objective) != XNLP_OK) {
        /* Recoverable error - carry on if possible */
    }

There is complete documentation for the API in the reference section of this manual. Additionally, many of the finer points raised in this part are discussed more fully in the advanced topics section of this manual.
CHAPTER 3
Messages

The Xpress-NonLinear run-time can produce a variety of messages to inform the user of the current state of the solve. There are three ways to access these messages from the API, and these are described in this chapter.

3.1 Console Output

The easiest way to see the output produced by the Xpress-NonLinear run-time is to enable console output. It is disabled by default, and has to be turned on using XNLPenableconsoleoutput. For example

    if (XNLPenableconsoleoutput(prob) != XNLP_OK) {
        /* Recoverable error */
    }

Console output can be switched off again using XNLPdisableconsoleoutput:

    if (XNLPdisableconsoleoutput(prob) != XNLP_OK) {
        /* Recoverable error */
    }

Both functions may be called at any time, including during a solve.

3.2 Logging

The console is frequently unavailable in real applications, or otherwise undesirable to use. XNLP makes it easy to produce a log file of all or parts of a solve. To begin logging to a new file, you can use XNLPopenlogfile. For example, you might write

    if (XNLPopenlogfile(prob, "mylogfile.txt") != XNLP_OK) {
        /* Recoverable error */
    }

It is also possible to append to an existing file, using XNLPappendlogfile, which otherwise has the same behaviour as XNLPopenlogfile. Multiple log files may be open at the same time, and log files can be opened and closed at any time, including during a solve. To close and flush a logfile, use XNLPcloselogfile, for example:

    if (XNLPcloselogfile(prob, "mylogfile.txt") != XNLP_OK) {
        /* Recoverable error */
    }
Note that the filename passed to XNLPcloselogfile must be the same string as was used when opening the file with XNLPopenlogfile or XNLPappendlogfile: the comparison is by string, not by path equivalence. In particular, even on Windows machines the comparison is case-sensitive, whether or not file names are in general case-sensitive on the current platform. To close and flush all open log files in one step, you can simply use XNLPcloselogfiles, for example:

if (XNLPcloselogfiles(prob) != XNLP_OK) {
    /* Recoverable error */
}

3.3 Message Callbacks

It is also possible to access messages programmatically through the Xpress-NonLinear API. To do so, you need to define a callback function which the XNLP run-time will invoke whenever a message is produced. This function takes an extra argument through which you can pass your own data to it. For example, we might define our callback function as:

void my_message_callback(XNLPprob prob, const char *msg, void *user_data)
{
    /* Cast user data and process message here */
}

We could then pass it to XNLPregistermessagecallback so that it is called for all subsequent messages. Note that there may be multiple callbacks per problem, and they will be invoked by the run-time in the order in which they were registered.

struct my_data_type *my_data;
int handle;
if (XNLPregistermessagecallback(prob, my_message_callback, my_data, &handle) != XNLP_OK) {
    /* Recoverable error */
}

The handle which is returned is a unique identifier for this callback. It can be used to remove this specific callback, by calling XNLPderegistermessagecallback. For example:

struct my_data_type *my_data;
if (XNLPderegistermessagecallback(prob, handle, (void **)&my_data) != XNLP_OK) {
    /* Recoverable error */
}
/* Clean up my_data, as needed */

The optional data parameter will, if non-NULL, be filled in with the user data registered for this callback. This can be used to simplify clean-up. To deregister all message callbacks on a problem, use XNLPclearmessagecallbacks.
Note that this function doesn't provide any way to free user data associated with the callbacks, so the caller is responsible for making sure that it has all been cleaned up after this call is made.

if (XNLPclearmessagecallbacks(prob) != XNLP_OK) {
    /* Recoverable error */
}
/* Clean up any user data for these callbacks, which we must have kept references to */
II. Advanced Topics
CHAPTER 4

Understanding solutions

4.1 Local and global optimality

A globally optimal solution is a feasible solution with the best possible objective value. In general, the global optimum for a problem is not unique. By contrast, a locally optimal solution has the best possible objective value within some open ball around it. For a convex problem, every local optimum is a global optimum, but for general nonlinear problems this is not the case. An example showing the nature and effect of locally optimal solutions is provided in mmxnlp_local_optima.mos.

For convex problems, which include linear, convex quadratic and convex quadratically constrained programs, solvers in the FICO Xpress library will always provide a globally optimal solution when one exists. This also holds true for mixed integer problems whose continuous relaxation is convex. When a problem is of a more general nonlinear type, there will typically be many local optima, which are potentially widely spaced, or even lie in parts of the feasible region which are not connected. For these problems, both XSLP and KNITRO guarantee only that they will return a locally optimal solution. That is, the result of optimization will be a solution which is better than any other in its immediate neighborhood, but there might exist other, far distant solutions with a better objective value. Finding a guaranteed global optimum for an arbitrary nonlinear function requires an exhaustive search, which may be orders of magnitude more expensive. To use an analogy, it is the difference between finding a valley in a range of mountains, and finding the deepest valley. When standing in a particular valley, there is no way to know whether there is a deeper valley somewhere else.

Neither local nor global optima are typically unique. The solution returned by a solver will depend on the control settings used and, particularly for non-convex problems, on the initial values provided.
A connected set of initial points yielding the same locally optimal solution is sometimes referred to as a region of attraction for that solution. These regions are typically both algorithm- and setting-dependent.

4.2 Convexity

Convex problems have many desirable characteristics from the perspective of mathematical optimization. Perhaps the most significant of these is that if both the objective and the feasible region are convex, any locally optimal solution found is also known immediately to be globally optimal. A demonstration of some of the effects of convexity can be found in mmxnlp_convexity.mos.

A constraint f(x) ≤ 0 is convex if the matrix of second derivatives of f, that is to say its Hessian, is positive semi-definite at every point at which it exists. This requirement can be understood geometrically as requiring every point on every line segment which connects two points satisfying
the constraint to also satisfy the constraint. It follows trivially that linear functions always lead to convex constraints, and that a nonlinear equality constraint is never convex.

Figure 4.1: Two convex functions on the left, and two non-convex functions on the right.

For regions, a similar property must hold: if any two points of the region can be connected by a line segment which lies fully in the region itself, the region is convex. This extension is straightforward when the properties of convex functions are considered.

Figure 4.2: A convex region on the left and a non-convex region on the right.

It is important to note that convexity is necessary for some solution techniques and not for others. In particular, some solvers require convexity of the constraints and objective function to hold only in the feasible region, whilst others may require convexity to hold across the entire space, including infeasible points. In the special case of quadratic and quadratically constrained programs, Xpress-NonLinear seamlessly migrates problems to solvers whose convexity requirements match the convexity of the problem.

4.3 Converged and practical solutions

In a strict mathematical sense, an algorithm is said to have converged if repeated iterations do not alter the coordinates of its solution significantly. A more practical view of convergence, as used in the nonlinear solvers of the Xpress suite, is to also consider the algorithm to have converged if repeated iterations have no significant effect on either the objective value or upon feasibility. This will be called extended convergence, to distinguish it from the strict sense.

For some problems, a solver may visit points at which the local neighborhood is very complex, or even malformed due to numerical issues. In this situation, the best results may be obtained when convergence of some of the variables is forced.
This leads to practical solutions, which are feasible and converged in most variables, but in which the remaining variables have had their convergence forced by the solver, for example by means of a trust region. Although these solutions are not locally optimal in a strict sense, they provide meaningful, useful results for difficult problems in practice.
4.4 The duals of a general nonlinear program

The dual of a mathematical program plays a fundamental role in the theory of continuous optimization. Each variable in a problem has a corresponding partner in that problem's dual, and the values of those variables are called the reduced costs and dual multipliers (shadow prices). Xpress-NonLinear makes estimates of these values available. These are normally defined in a similar way to the usual linear programming case, so that each value represents the rate of change of the objective when either increasing the corresponding primal variable or relaxing the corresponding primal constraint. For a demonstration of dual values in Xpress-NonLinear, please see mmxnlp_nlp_duals.mos.

From an algorithmic perspective, one of the most important roles of the dual variables is to characterize local optimality. In this context, the dual multipliers and reduced costs are called Lagrange multipliers, and a solution with both primal and dual feasible variables satisfies the Karush-Kuhn-Tucker conditions. However, it is important to note that for general nonlinear problems, there exist situations in which there are no such multipliers. Geometrically, this means that the slope of the objective function is orthogonal to the linearization of the active constraints, but that their curvature still prevents any movement in the improving direction. As a simple example, consider:

    minimize    y
    subject to  x² + y² ≤ 1
                (x − 2)² + y² ≤ 1

which is shown graphically in figure 4.3.

Figure 4.3: A problem admitting no dual values

This problem has a single feasible solution at (1,0). Reduced costs and dual multipliers could never be meaningful indicators of optimality, and indeed are not well-defined for this problem.
Intuitively, this arises because the feasible region lacks an interior; the existence of an interior (also referred to as the Slater condition) is one of several alternative conditions which can be enforced to ensure that such situations do not occur. The other common condition for well-defined duals is that the gradients of the active constraints are linearly independent. Problems without valid duals do not often arise in practice, but it is important to be aware of the possibility. Analytic detection of such issues is difficult, and they manifest instead in the form of unexpectedly large or otherwise implausible dual values.
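For reference, the Karush-Kuhn-Tucker conditions mentioned above can be written out in their standard textbook form (a sketch, not notation specific to Xpress). For a problem of the form minimize f(x) subject to gᵢ(x) ≤ 0, a point x* is a KKT point if there exist multipliers λᵢ such that:

```latex
\begin{aligned}
&\nabla f(x^*) + \sum_i \lambda_i \nabla g_i(x^*) = 0 && \text{(stationarity)}\\
&g_i(x^*) \le 0 && \text{(primal feasibility)}\\
&\lambda_i \ge 0 && \text{(dual feasibility)}\\
&\lambda_i \, g_i(x^*) = 0 && \text{(complementary slackness)}
\end{aligned}
```

In the example of figure 4.3, stationarity cannot be satisfied at (1,0) by any finite λᵢ, which is exactly the failure of well-defined duals described above.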
CHAPTER 5

Some practical considerations

This chapter provides guidance on some of the practical aspects to consider when performing optimization with Xpress-NonLinear.

5.1 The initial point

The solution process is sensitive to the initial values which are selected for the variables in the problem, and particularly so for non-convex problems. It is not uncommon for a general nonlinear problem to have a feasible region which is not connected, and in this case the starting point may largely determine which region, connected set, or basin of attraction the final solution belongs to. An example of this type of problem is given in mmxnlp_nonconnected.mos. Note that it may not always be beneficial to completely specify an initial point, as the solvers themselves may be able to detect suitable starting values for some or all of the variables.

5.2 Derivatives

Both XSLP and KNITRO require the availability of derivative information for the constraints and objective function in order to solve a problem. In the Xpress-NonLinear framework, several advanced approaches to the production of both first and second order derivatives (the Jacobian and Hessian matrices) are available, and which approach is used can be controlled by the user. A demonstration of some practical aspects of the different derivative engines is available in mmxnlp_derivatives.mos.

Finite Differences

The simplest such method is the use of finite differences, sometimes called numerical derivatives. This is a relatively coarse approximation, in which the function is evaluated in a small neighborhood of the point in question. The standard argument from calculus indicates that an increasingly accurate approximation to the derivative of the function will be found as the size of the neighborhood decreases.
This argument ignores the effects of floating point arithmetic, however, which can make it difficult to select values sufficiently small to give a good approximation to the function, and yet sufficiently large to avoid substantial numerical error. The high performance implementation in XNLP makes use of subexpression caching to improve performance, but finite differences are inherently inefficient. They may however be necessary when the function itself is not known in closed form. When analytic approaches cannot be used, due to the use of expensive black-box functions which do not provide derivatives (note that XSLP does allow user functions to provide their own derivatives), the cost of function evaluations may become a dominant factor in solve time. It is important to note that each second order numerical
More informationEcma International Policy on Submission, Inclusion and Licensing of Software
Ecma International Policy on Submission, Inclusion and Licensing of Software Experimental TC39 Policy This Ecma International Policy on Submission, Inclusion and Licensing of Software ( Policy ) is being
More informationCasADi tutorial Introduction
Lund, 6 December 2011 CasADi tutorial Introduction Joel Andersson Department of Electrical Engineering (ESAT-SCD) & Optimization in Engineering Center (OPTEC) Katholieke Universiteit Leuven OPTEC (ESAT
More informationLinear programming II João Carlos Lourenço
Decision Support Models Linear programming II João Carlos Lourenço joao.lourenco@ist.utl.pt Academic year 2012/2013 Readings: Hillier, F.S., Lieberman, G.J., 2010. Introduction to Operations Research,
More informationPart 4. Decomposition Algorithms Dantzig-Wolf Decomposition Algorithm
In the name of God Part 4. 4.1. Dantzig-Wolf Decomposition Algorithm Spring 2010 Instructor: Dr. Masoud Yaghini Introduction Introduction Real world linear programs having thousands of rows and columns.
More informationFinancial Optimization ISE 347/447. Lecture 13. Dr. Ted Ralphs
Financial Optimization ISE 347/447 Lecture 13 Dr. Ted Ralphs ISE 347/447 Lecture 13 1 Reading for This Lecture C&T Chapter 11 ISE 347/447 Lecture 13 2 Integer Linear Optimization An integer linear optimization
More informationData Mining Chapter 8: Search and Optimization Methods Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University
Data Mining Chapter 8: Search and Optimization Methods Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Search & Optimization Search and Optimization method deals with
More informationSymantec Enterprise Vault Technical Note
Symantec Enterprise Vault Technical Note FSA Reporting deployment guidelines 8.0 Symantec Information Foundation Symantec Enterprise Vault: FSA Reporting deployment guidelines The software described in
More informationEcma International Policy on Submission, Inclusion and Licensing of Software
Ecma International Policy on Submission, Inclusion and Licensing of Software Experimental TC39 Policy This Ecma International Policy on Submission, Inclusion and Licensing of Software ( Policy ) is being
More informationThe Gurobi Optimizer. Bob Bixby
The Gurobi Optimizer Bob Bixby Outline Gurobi Introduction Company Products Benchmarks Gurobi Technology Rethinking MIP MIP as a bag of tricks 8-Jul-11 2010 Gurobi Optimization 2 Gurobi Optimization Incorporated
More informationLecture 15: Log Barrier Method
10-725/36-725: Convex Optimization Spring 2015 Lecturer: Ryan Tibshirani Lecture 15: Log Barrier Method Scribes: Pradeep Dasigi, Mohammad Gowayyed Note: LaTeX template courtesy of UC Berkeley EECS dept.
More informationVeritas Storage Foundation and High Availability Solutions HA and Disaster Recovery Solutions Guide for Microsoft SharePoint Server
Veritas Storage Foundation and High Availability Solutions HA and Disaster Recovery Solutions Guide for Microsoft SharePoint Server Windows Server 2003, Windows Server 2008 5.1 Service Pack 1 Veritas Storage
More informationTheoretical Concepts of Machine Learning
Theoretical Concepts of Machine Learning Part 2 Institute of Bioinformatics Johannes Kepler University, Linz, Austria Outline 1 Introduction 2 Generalization Error 3 Maximum Likelihood 4 Noise Models 5
More informationPrimal Dual Schema Approach to the Labeling Problem with Applications to TSP
1 Primal Dual Schema Approach to the Labeling Problem with Applications to TSP Colin Brown, Simon Fraser University Instructor: Ramesh Krishnamurti The Metric Labeling Problem has many applications, especially
More informationOpen Source Used In c1101 and c1109 Cisco IOS XE Fuji
Open Source Used In c1101 and c1109 Cisco IOS XE Fuji 16.8.1 Cisco Systems, Inc. www.cisco.com Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco
More informationOutline. CS38 Introduction to Algorithms. Linear programming 5/21/2014. Linear programming. Lecture 15 May 20, 2014
5/2/24 Outline CS38 Introduction to Algorithms Lecture 5 May 2, 24 Linear programming simplex algorithm LP duality ellipsoid algorithm * slides from Kevin Wayne May 2, 24 CS38 Lecture 5 May 2, 24 CS38
More information16.410/413 Principles of Autonomy and Decision Making
16.410/413 Principles of Autonomy and Decision Making Lecture 17: The Simplex Method Emilio Frazzoli Aeronautics and Astronautics Massachusetts Institute of Technology November 10, 2010 Frazzoli (MIT)
More informationEARLY INTERIOR-POINT METHODS
C H A P T E R 3 EARLY INTERIOR-POINT METHODS An interior-point algorithm is one that improves a feasible interior solution point of the linear program by steps through the interior, rather than one that
More informationNokia Intellisync Mobile Suite Client Guide. S60 Platform, 3rd Edition
Nokia Intellisync Mobile Suite Client Guide S60 Platform, 3rd Edition Published May 2008 COPYRIGHT Copyright 1997-2008 Nokia Corporation. All rights reserved. Nokia, Nokia Connecting People, Intellisync,
More information3 INTEGER LINEAR PROGRAMMING
3 INTEGER LINEAR PROGRAMMING PROBLEM DEFINITION Integer linear programming problem (ILP) of the decision variables x 1,..,x n : (ILP) subject to minimize c x j j n j= 1 a ij x j x j 0 x j integer n j=
More informationAgenda. Understanding advanced modeling techniques takes some time and experience No exercises today Ask questions!
Modeling 2 Agenda Understanding advanced modeling techniques takes some time and experience No exercises today Ask questions! Part 1: Overview of selected modeling techniques Background Range constraints
More informationInterior Point I. Lab 21. Introduction
Lab 21 Interior Point I Lab Objective: For decades after its invention, the Simplex algorithm was the only competitive method for linear programming. The past 30 years, however, have seen the discovery
More information5 The Theory of the Simplex Method
5 The Theory of the Simplex Method Chapter 4 introduced the basic mechanics of the simplex method. Now we shall delve a little more deeply into this algorithm by examining some of its underlying theory.
More informationKEPServerEX Client Connectivity Guide
KEPServerEX Client Connectivity Guide For Intellution s FIX32 KTSM-00005 v. 1.02 Copyright 2001, Kepware Technologies KEPWARE END USER LICENSE AGREEMENT AND LIMITED WARRANTY The software accompanying this
More informationOptimization III: Constrained Optimization
Optimization III: Constrained Optimization CS 205A: Mathematical Methods for Robotics, Vision, and Graphics Doug James (and Justin Solomon) CS 205A: Mathematical Methods Optimization III: Constrained Optimization
More informationCHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM
20 CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM 2.1 CLASSIFICATION OF CONVENTIONAL TECHNIQUES Classical optimization methods can be classified into two distinct groups:
More informationNonlinear Programming
Nonlinear Programming SECOND EDITION Dimitri P. Bertsekas Massachusetts Institute of Technology WWW site for book Information and Orders http://world.std.com/~athenasc/index.html Athena Scientific, Belmont,
More informationLinear and Integer Programming :Algorithms in the Real World. Related Optimization Problems. How important is optimization?
Linear and Integer Programming 15-853:Algorithms in the Real World Linear and Integer Programming I Introduction Geometric Interpretation Simplex Method Linear or Integer programming maximize z = c T x
More information