
Noname manuscript No. (will be inserted by the editor)

Continuous Piecewise Linear Delta-Approximations for Bivariate and Multivariate Functions

Steffen Rebennack · Josef Kallrath

Abstract  For functions depending on two variables, we automatically construct triangulations subject to the condition that the continuous, piecewise linear approximation, under- or overestimation never deviates more than a given δ-tolerance from the original function over a given domain. This tolerance is ensured by solving subproblems over each triangle to global optimality. The continuous, piecewise linear approximators, under- and overestimators involve shift variables at the vertices of the triangles, leading to a small number of triangles while still ensuring continuity over the entire domain. For functions depending on more than two variables, we provide appropriate transformations and substitutions which allow the use of one- or two-dimensional δ-approximators. We address the problem of error propagation when using these dimensionality reduction routines. We discuss and analyze the trade-off between one-dimensional and two-dimensional approaches, and we demonstrate the numerical behavior of our approach on nine bivariate functions for five different δ-tolerances.

Keywords  global optimization · nonlinear programming · mixed-integer nonlinear programming · non-convex optimization · error propagation

Mathematics Subject Classification  90C26

S. Rebennack (corresponding author)
Division of Economics and Business, Colorado School of Mines, Golden, CO, USA
e-mail: srebenna@mines.edu

J. Kallrath
Department of Astronomy, University of Florida, Gainesville, FL, USA
e-mail: kallrath@astro.ufl.edu

1 Introduction

In this paper, we are interested in approximating nonlinear multivariate functions. The computed approximations should not deviate more than a pre-defined tolerance δ > 0 from the original function. In addition, the approximated functions should be continuous and piecewise linear so that they can be represented using mixed-integer linear programming (MILP) techniques. Thus, we are not seeking the computation of a global optimum of a nonconvex function but a function approximation instead. The motivation to compute these δ-approximations is to approximate a global optimization problem via an MILP problem. The δ-tolerance of the obtained approximation allows for the computation of safe bounds for the original global optimization problem, if the approximation is constructed carefully. Such MILP representations are of particular interest if the global optimization problem is embedded into a, typically much larger, MILP problem. Examples are water supply network optimization and transient technical optimization of gas networks, generalized pooling and

integrated water systems problems, gas lifting and well scheduling for enhanced oil recovery, and electrical networks [ , ]. Our own motivation comes from large-scale production planning problems [3, 4] and power system optimization problems [5–7].

The incremental approach [2, 8] producing Delaunay triangulations is most closely related to our work, but it does not involve shift variables at the vertices of the triangles. Our approach can also handle arbitrary, indefinite functions regardless of their curvature. Instead of reviewing the rich body of literature related to piecewise linear approximation, we point the reader to the following papers: Ref. [1] presents explicit, piecewise linear formulations of two- or three-dimensional functions based on simplices; Ref. [9] uses triangulations for quadratically constrained problems; Ref. [10] compares different formulations (one-dimensional, rectangle, triangle) to approximate two-dimensional functions; and Refs. [11, 12] are the first to compute optimal breakpoint systems for univariate functions. The recent invention of modified SOS-2 formulations, growing only logarithmically in the number of support areas (breakpoints, triangles, or simplices), somewhat relieves the pressure to seek a minimum number of support areas involved in the linear approximation of functions [13]. We extend the ideas for univariate functions [11] to higher dimensions.

The contributions of this paper are threefold:
1. We develop algorithms to automatically compute triangulations and construct continuous, piecewise linear functions over such systems of triangles which approximate nonlinear functions in two variables to δ-accuracy.
2. We classify a rich class of n-dimensional functions which can be transformed into lower-dimensional functions, along with the error propagation.

3. We demonstrate both the one- and two-dimensional approximation techniques on a test bed of nine bivariate functions.

In the remainder of this paper, we construct δ-accurate piecewise linear approximators, over- and underestimators for bivariate functions in Section 2. Transformations and higher-dimensional functions are treated in Section 3. Section 4 provides our numerical results. We conclude in Section 5.

2 Bivariate Functions

In the one-dimensional case, we construct convex linear combinations over breakpoint-limited, disjunct intervals covering the region of interest. In the two-dimensional case, we start with rectangular regions, and we seek support areas which cover the rectangle and which can easily be made larger or smaller, reflecting the curvature of the function we want to approximate. While functions depending on two or more variables are treated by equally-sized simplices leading to direct SOS-2 representations in [13], our approach utilizes triangles of different sizes to better adjust to the function.

Consider a triangle T ⊆ IR² in the x₁-x₂-plane established by three points (vertices) P_j = (X_j1, X_j2) ∈ IR², j = 1, 2, 3. We assume that the three points are not collinear, i.e., they never all lie on the same line. Each point p = (x₁, x₂) ∈ T can be represented as a convex combination of these three points, i.e.,

    p = ∑_{j=1}^{3} λ_j P_j   with   λ_j ≥ 0   and   ∑_{j=1}^{3} λ_j = 1.

Let f(p) = f(x₁, x₂) be a real-valued function in the two arguments x₁ and x₂, defined over a rectangle D := [X₁⁻, X₁⁺] × [X₂⁻, X₂⁺] ⊇ T. We can construct a linear approximation l(p) of f over T by a convex combination of the function values f_j = f(P_j) = f(X_j1, X_j2) at the vertices P_j:

    l(p) = ∑_{j=1}^{3} λ_j f_j.

2.1 Constructing the Triangulation

Our goal is now to construct a triangulation of D by a set T of triangles T_t, with ∪_{T_t ∈ T} T_t ⊇ D, with a minimal number of triangles subject to the constraint

    ε_t := max_{p ∈ T_t} | l(p) − f(p) | ≤ δ,   for all T_t ∈ T.   (1)

The construction of triangulations with a minimum number of triangles remains an open problem. The direct approach, in which we proceed similarly to the one-dimensional case by allowing a fixed number of breakpoints p_b := (x_b, y_b) distributed in the plane, raises severe problems:
1. How to construct non-overlapping triangles (from the p_b) covering the rectangle?
2. How to ensure continuity at the vertices? If two triangles share an edge e, then the continuity constraints have to be applied to the two vertices shared along e in both triangles.

Using a marching scheme (cf. the moving breakpoint approach [11, Section 4]) leads to complications in constructing an irregular grid of triangles. Therefore, we proceed differently and use a triangle refinement approach. To begin with, we divide D into two triangles T₁ and T₂. Then, for a given triangle T_t with vertices [v_t1, v_t2, v_t3] and fixed shift variables [s_t1, s_t2, s_t3], we solve the

following (potentially nonconvex) nonlinear programming (NLP) problem:

    ε_t := max | l(p_t) − f(p_t) |   (2)
    s.t.  p_t = ∑_{j=1}^{3} λ_j v_tj   (3)
          ∑_{j=1}^{3} λ_j = 1   (4)
          l(p_t) := ∑_{j=1}^{3} λ_j φ(v_tj)   (5)
          λ_j ∈ [0, 1],  p_t ∈ T_t,  j = 1, 2, 3,   (6)

with φ(·) defined as

    φ(v_tj) = f(v_tj) + s_tj,   j = 1, 2, 3,   (7)

for the vertices v_tj contained in triangle T_t; (7) is the analogon to the corresponding equation in [11]. If ε_t ≤ δ, then we keep the triangle T_t along with the shift variables s_tj; i.e., we add T_t to T. Otherwise, we try different values for the shift variables s_tj, discretized as

    s_tjd := ( d/D − 1/2 ) δ   (8)

for some D ∈ IN and d ∈ {0, ..., D}. Care has to be taken to identify shift variables which have been fixed before. Thus, for each triangle T_t, we solve between 1 and D + 1 NLP problems (2)-(6).

If none of the shift variable combinations satisfies ε_t ≤ δ, we use a so-called sub-division rule to sub-divide triangle T_t into smaller triangles. Given T_t with vertices [v_t1, v_t2, v_t3], let p_t1, p_t2, p_t3 be the midpoints of the three sides of the triangle, i.e., p_t1 := (v_t1 + v_t2)/2, p_t2 := (v_t2 + v_t3)/2 and p_t3 := (v_t3 + v_t1)/2. Now, we divide T_t into four triangles as follows (see Figure 1):

    T₁ = [v_t1, p_t1, p_t3],  T₂ = [v_t2, p_t2, p_t1],  T₃ = [v_t3, p_t3, p_t2],  T₄ = [p_t1, p_t2, p_t3].   (9)
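The convex-combination machinery of (3)-(7) is straightforward to exercise in code. The sketch below (not the paper's implementation; function and variable names are illustrative) computes the barycentric coordinates λ of a point and evaluates the interpolant l(p) with vertex shifts:

```python
def barycentric(p, v1, v2, v3):
    """Coefficients (l1, l2, l3) with p = l1*v1 + l2*v2 + l3*v3 and l1 + l2 + l3 = 1."""
    (x, y), (x1, y1), (x2, y2), (x3, y3) = p, v1, v2, v3
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)  # nonzero: vertices not collinear
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    return l1, l2, 1.0 - l1 - l2

def interpolate(f, p, tri, shifts=(0.0, 0.0, 0.0)):
    """l(p) per (5), with phi(v_j) = f(v_j) + s_j as in (7)."""
    lams = barycentric(p, *tri)
    return sum(lam * (f(*v) + s) for lam, v, s in zip(lams, tri, shifts))
```

Since l is the unique affine function agreeing with φ at the three vertices, it reproduces any affine f exactly; for nonlinear f, the gap |l(p) − f(p)| is precisely the quantity maximized in (2).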

Once all triangles in T yield a piecewise linear δ-approximator for f, we need to remove potential discontinuities at the boundaries of the triangles. The idea is to sub-divide each triangle T_t which contains vertices of other triangles on its boundary into smaller triangles, without introducing any new vertices. The piecewise linear functions constructed on the smaller triangles then deviate at most δ from f; the shift variables at the vertices remain untouched. One simple sub-division rule is to iteratively connect one of the vertices on the boundary of T_t with another vertex on the boundary or with a vertex of T_t, but not with a vertex on the same side of the triangle. This idea is illustrated in Figure 1 as well.

Fig. 1: Sub-division rule (left) and removal of discontinuities (right)

The described procedure is summarized in Algorithm 2.1. The working set of triangles which have not been checked yet is maintained separately from the output set T; set S is a set of ordered pairs {s_tj, v_tj} which assigns to each vertex v_tj of a triangle a shift variable s_tj. Using Algorithm 2.1, we obtain a triangulation of the domain D, leading to a continuous, piecewise linear approximation with the desired accuracy δ. We make the following observations:

(1) If continuity of the approximator is not required, then Algorithm 2.1 can be modified as follows: the discontinuity-removal steps at the end can be dropped, and the acceptance criteria for ε_t can be relaxed.

(2) Computationally, it is an advantage to terminate the inner loop over the shift discretization early if the error ε_t > δ.

(3) Algorithm 2.1 works for any compact domain which can be partitioned into a (finite) set of triangles. Step 6 of the algorithm needs to be adjusted accordingly.

(4) The only requirement on a sub-division rule is that, after finitely many iterations, triangles of arbitrarily small side lengths can be produced. Thus, we suggest an alternative sub-division rule: using the point p*_t of maximal deviation of l and f over T_t, obtained by solving (2)-(6), we divide T_t with vertices [v_t1, v_t2, v_t3] into three triangles as follows (we found fixing the free shift variables to zero when computing the point p*_t to be computationally most efficient):

    [v_t1, v_t2, p*_t],  [v_t2, v_t3, p*_t],  [v_t3, v_t1, p*_t].   (10)

The case that p*_t happens to lie on one of the three sides of the triangle T_t needs special care. First, we remove the triangle with zero area. Second, we need to ensure that the calculated function approximation is continuous along this particular side of the triangle T_t. The continuity can be ensured by a simple trick: divide the one neighboring triangle which contains the point p*_t into three triangles using (10). This preserves the approximation tolerance if a deviation of δ/2 instead of δ is allowed up front.

This sub-division rule has one drawback: one could imagine a case where the computed triangles get arbitrarily small in area, but not in side length. To avoid this issue, one could apply the midpoint sub-division rule (9) after every fixed number of iterations. The sub-division rule using a point of maximal deviation has the advantage over the midpoint rule that the number of triangles is expected to increase more slowly.
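Both sub-division rules are a few lines each. The sketch below (illustrative names, not the paper's code) implements the midpoint rule (9) and the interior-point rule (10):

```python
def subdivide_midpoints(v1, v2, v3):
    """Rule (9): split a triangle into four via the three side midpoints."""
    mid = lambda a, b: ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    p1, p2, p3 = mid(v1, v2), mid(v2, v3), mid(v3, v1)
    return [(v1, p1, p3), (v2, p2, p1), (v3, p3, p2), (p1, p2, p3)]

def subdivide_at_point(v1, v2, v3, p):
    """Rule (10): split at an interior point p, e.g., the point of maximal deviation."""
    return [(v1, v2, p), (v2, v3, p), (v3, v1, p)]
```

The midpoint rule halves every side length, so each child carries a quarter of the parent's area, which is what guarantees observation (4) for Algorithm 2.1. The interior-point rule produces a zero-area child exactly when p lies on a side, the degenerate case treated in the text above.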

Algorithm 2.1  Heuristic to Compute Triangulation and δ-Approximator

1:  // INPUT: Continuous function f, scalar δ > 0, and shift variable discretization size D ∈ IN
2:  // OUTPUT: Set of triangles T and shift values S
3:  // Initialize
4:  T := ∅, S := ∅
5:  // rectangle D = [X₁⁻, X₁⁺] × [X₂⁻, X₂⁺] is divided into two triangles
6:  Ť := {[(X₁⁻, X₂⁻), (X₁⁺, X₂⁻), (X₁⁻, X₂⁺)], [(X₁⁻, X₂⁺), (X₁⁺, X₂⁻), (X₁⁺, X₂⁺)]}
7:  // Divide triangles until all triangles satisfy the δ-criteria
8:  repeat
9:    // obtain a triangle and remove it from the working set
10:   choose T_t ∈ Ť with vertices [v_t1, v_t2, v_t3] and update Ť ← Ť \ {T_t}
11:   // obtain fixed variables
12:   if {s_tj, v_tj} ∈ S, then obtain s_tj and fix this variable in formulation (2)-(6); for all j = 1, 2, 3
13:   repeat
14:     for all vertices v_tj with un-fixed shift s_tj, obtain a discretized value via (8)
15:     // optimize
16:     solve (2)-(6) with fixed shift variables to obtain ε_t
17:   until ε_t ≤ δ or all discretized values for the shift variables have been checked
18:   // check δ-criteria
19:   if ε_t ≤ δ then
20:     // update the set of triangles ...
21:     T ← T ∪ {T_t}
22:     // ... and the shift variables
23:     S ← S ∪ {{s_tj, v_tj}, j = 1, 2, 3}
24:   else
25:     construct new triangles via (9) and add them to the working set Ť
26:   end if
27: until Ť = ∅
28: // Remove discontinuities
29: for all T_t ∈ T do
30:   if there is a triangle in T with a vertex lying on one of the sides of T_t then
31:     sub-divide triangle T_t
32:   end if
33: end for
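The control flow of Algorithm 2.1 can be sketched compactly. The following is a hedged stand-in, not the paper's implementation: the global NLP solve of step 16 is replaced by a deviation estimate on a barycentric sample grid, and all shift variables are fixed to zero, so it only illustrates the refine-until-tolerant loop:

```python
def grid_deviation(f, tri, n=16):
    """Estimate max |l(p) - f(p)| over a triangle on a barycentric grid."""
    (v1, v2, v3) = tri
    fv = (f(*v1), f(*v2), f(*v3))
    worst = 0.0
    for i in range(n + 1):
        for j in range(n + 1 - i):
            l1, l2 = i / n, j / n
            l3 = 1.0 - l1 - l2
            p = (l1 * v1[0] + l2 * v2[0] + l3 * v3[0],
                 l1 * v1[1] + l2 * v2[1] + l3 * v3[1])
            lin = l1 * fv[0] + l2 * fv[1] + l3 * fv[2]
            worst = max(worst, abs(lin - f(*p)))
    return worst

def triangulate(f, x_bounds, y_bounds, delta):
    """Refine the two starting triangles of step 6 until every triangle passes."""
    (xl, xu), (yl, yu) = x_bounds, y_bounds
    todo = [((xl, yl), (xu, yl), (xl, yu)), ((xl, yu), (xu, yl), (xu, yu))]
    done = []
    while todo:
        tri = todo.pop()
        if grid_deviation(f, tri) <= delta:
            done.append(tri)
        else:  # midpoint rule (9)
            (v1, v2, v3) = tri
            m = lambda a, b: ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
            p1, p2, p3 = m(v1, v2), m(v2, v3), m(v3, v1)
            todo += [(v1, p1, p3), (v2, p2, p1), (v3, p3, p2), (p1, p2, p3)]
    return done
```

For a smooth f, the loop terminates because the interpolation error of linear elements shrinks with the squared side length, which the midpoint rule halves at every split.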

2.2 Deriving Good δ-Underestimators and δ-Overestimators

The easiest way to construct δ-under- and overestimators l±(x) in the bivariate case is to exploit the interpolation-based approximation l(x) of f(x) with accuracy δ by setting l±(x) := l(x) ± δ. However, if the δ-approximator for f does not possess a minimal number of triangles, then the computed δ-under- and overestimators are not minimal in the number of triangles used in the triangulation [12].

Our specific calculation of δ-underestimators and δ-overestimators follows very closely the idea of δ-approximators. We focus our discussion on δ-underestimators. Instead of solving (2), we use for l⁻(p)

    ε_t := max_{p ∈ T_t} ( f(p) − l⁻(p) ) ≤ δ   s.t.   l⁻(p) ≤ f(p),  for all p ∈ T_t.   (11)

We discretize the continuum conditions (11), for a given triangle, into I grid points p_ti. This is achieved by choosing λ_1i and λ_2i with i ∈ I := {1, ..., I}, yielding λ_3i = 1 − λ_1i − λ_2i. This generates a system of grid points p_ti,

    p_ti = ∑_{j=1}^{3} λ_ji v_tj,   T_t ∈ T,  i ∈ I.   (12)

Let T_t be a triangle with vertices [v_t1, v_t2, v_t3]. The NLP (2)-(6) is replaced by:

    ε_t^{D+} := min η   (13)
    s.t.  η ≥ f(p_ti) − l⁻(p_ti),   i ∈ I   (14)
          l⁻(p_ti) ≤ f(p_ti),   i ∈ I   (15)
          l⁻(p_ti) := ∑_{j=1}^{3} λ_ji φ(v_tj),   i ∈ I   (16)
          η ≥ 0,  s_tj ∈ [−δ, 0],  j = 1, 2, 3,   (17)

with φ(·) as given by (7); the λ_ji are fixed and obtained by (12). Notice that the shift variables are not discretized, in contrast to the approach described in Section 2.1.

If ε_t^{D+} > δ, one can proceed with a sub-division rule as in the case of δ-approximators to further divide the triangle T_t. However, if ε_t^{D+} ≤ δ, we need to ensure that the derived l⁻ is indeed an underestimator for f, because (14)-(15) enforce the conditions only at the grid points. Therefore, we check

    z_max := max_{p ∈ T_t} ( f(p) − l⁻(p) ) ≤ δ   and   z_min := min_{p ∈ T_t} ( f(p) − l⁻(p) ) ≥ 0.

If both conditions are met, then the computed l⁻ is an underestimator for f on triangle T_t. Thus, we can keep T_t as well as the shift variables s_tj. Otherwise, we have to divide the triangle T_t further. To ensure continuity at the boundaries of the triangles T_t, we proceed as in the case of δ-approximators (the discontinuity-removal steps of Algorithm 2.1). Shifting the obtained approximator by δ ensures a piecewise linear, continuous δ-underestimator for f.

3 Multivariate Functions and Their Linear Approximations

The ideas and concepts developed for univariate and bivariate functions can be extended to approximate functions of higher dimensions by piecewise linear constructs. However, the number of support areas, usually simplices, increases exponentially [13]. An open question is whether it is worthwhile to exploit special properties, e.g., separability, of the functions to reduce the dimensionality, or whether it is more efficient to approximate the function directly in its full dimensionality. Intuitively, one might argue that the reduction of dimensionality pays off, but this is not obvious and may depend both on the problem and on the branching strategy used by the selected MILP solver.
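The paper computes the shifts via the LP (13)-(17). As a hedged stand-in that needs no LP solver, the sketch below brute-forces the three shifts over a coarse grid in [−δ, 0] and keeps the best combination that stays below f at the sample points (all names are illustrative):

```python
from itertools import product

def best_underestimator_shifts(f, tri, delta, samples, steps=8):
    """Grid stand-in for LP (13)-(17): minimize eta = max_i (f(p_i) - l(p_i))
    subject to l(p_i) <= f(p_i) at all sample points, with s_j in [-delta, 0]."""
    (v1, v2, v3) = tri
    fvals = [f(*p) for p in samples]

    def lams(p):  # barycentric coordinates lambda_ji, fixed as in (12)
        (x, y), (x1, y1), (x2, y2), (x3, y3) = p, v1, v2, v3
        det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
        l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
        l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
        return l1, l2, 1.0 - l1 - l2

    lam = [lams(p) for p in samples]
    grid = [-delta * k / steps for k in range(steps + 1)]
    best, best_eta = None, float("inf")
    for s in product(grid, repeat=3):
        lvals = [l1 * (f(*v1) + s[0]) + l2 * (f(*v2) + s[1]) + l3 * (f(*v3) + s[2])
                 for (l1, l2, l3) in lam]
        if any(lv > fv + 1e-12 for lv, fv in zip(lvals, fvals)):
            continue  # violates (15): pokes above f at a grid point
        eta = max(fv - lv for fv, lv in zip(fvals, lvals))  # objective (13)-(14)
        if eta < best_eta:
            best, best_eta = s, eta
    return best, best_eta
```

As in the text, a candidate with η ≤ δ must still be verified on the continuum (the z_max/z_min check) before T_t is accepted, since (14)-(15) see only the grid points.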

Transformations for special nonlinear expressions enable us to utilize one- and two-dimensional techniques to construct δ-approximators for n-dimensional functions. We summarize four function types and their transformation tricks in Table 1.

I: Separable functions. We apply the one-dimensional δ-approximators to each of the n one-dimensional functions f_i(x_i) separately. The obtained approximation error for f(x) is then the sum of the individual errors δ_i for each expression f_i(x_i).

II: Positive function products. For products of functions, we require that all functions are positive. Otherwise, assume without loss of generality that exactly one function, f_j(x_j), is non-positive. As f_j is continuous on the compactum [X_j⁻, X_j⁺], f_j is bounded. Therefore, L_j := min_{x ∈ [X_j⁻, X_j⁺]} f_j(x) is finite. Now, substitute

    ∏_{i=1}^{n} f_i(x_i) = ( f_j(x_j) + D_j ) ∏_{i=1, i≠j}^{n} f_i(x_i) − D_j ∏_{i=1, i≠j}^{n} f_i(x_i)

with D_j = −L_j + k and some positive number k, e.g., k = 1. As

    f̃₁(x) := ( f_j(x_j) + D_j ) ∏_{i=1, i≠j}^{n} f_i(x_i) > 0   and   f̃₂(x) := D_j ∏_{i=1, i≠j}^{n} f_i(x_i) > 0,

we can apply the transformation to both functions f̃₁(x) and f̃₂(x) separately.

Note that the error obtained by the transformation of a product of positive functions depends on f(x). If ln(f(x)) has error δ = ∑_{i=1}^{n} δ_i, then the error Δf(x) of f(x) follows from

    f(x) + Δf(x) = e^{ln(f(x)) + δ} = f(x) e^{δ}   and   Δf(x) = f(x) ( e^{δ} − 1 ),

which for small values of δ reduces to Δf(x) ≈ f(x) δ. Thus, we lose the separation property between the x_i variables regarding the discretization error; i.e., although the discretization errors of x_i and x_j are separated for i ≠ j, the discretization error of the product f_i(x_i) f_j(x_j)

depends on both x_i and x_j (as well as on f_i(x_i) and f_j(x_j)). However, if good bounds on f(x) are available, then this approach may still be computationally feasible; e.g., f(x) ≤ 1 is desirable, as this guarantees an approximation error for f(x) of at most e^{δ} − 1, or approximately δ for small values of δ.

III: Exponentials. Chains of exponentials f₁(x)^{f₂(x)} for n-dimensional functions f₁(x) and f₂(x) with x ∈ IRⁿ require some care regarding the arguments. The transformation works only for f₁(x) > 1 and f₂(x) > 0.

IV: Substitutions. Complicated terms in which several variables appear as arguments of functions can always be replaced by substitutions. Let f(x) = f₂( f₁(x) ) be a nested function with x ∈ D ⊆ IRⁿ. Define u := f₁(x) and f₁ : D → D₁. If function f₁(x) is approximated with an absolute error of δ₁, then a maximal error of γ(δ₁) is derived for f₂ (if f₂ is represented exactly). The function γ(δ₁) is the maximal deviation of function f₂ over its domain under a small variation of magnitude δ₁. Note that γ(δ₁) can be overestimated using the derivative of f₂ as follows:

    γ(δ₁) ≤ ‖f₂′‖ δ₁,   (18)

where ‖f₂′‖ = max_{u ∈ D₁} | ∂f₂(u)/∂u |, if f₂ is differentiable in a domain containing D₁. The errors of an approximation of f₂ and γ(δ₁) are then additive for function f(x).

4 Computational Results

We use the modeling language GAMS (v. 23.6), employing the global optimization solver LindoGlobal, and run the computational tests on a standard desktop computer as described in [11].
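The first-order error relation Δf(x) = f(x)(e^δ − 1) ≈ f(x)δ from the product transformation above can be sanity-checked numerically (the values below are illustrative, not taken from the paper):

```python
import math

delta = 0.01          # assumed total error of the ln-approximation
f_val = 3.7           # assumed (positive) function value f(x)
exact = f_val * (math.exp(delta) - 1.0)   # Delta f(x) = f(x) (e^delta - 1)
first_order = f_val * delta               # small-delta approximation
rel_gap = abs(exact - first_order) / exact
print(exact, first_order, rel_gap)        # relative gap is about delta/2
```

The exact error always exceeds the first-order one (since e^δ − 1 > δ for δ > 0), and the relative gap shrinks linearly in δ, which is why the simplification is harmless for the small tolerances used here.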

Table 1: Transformations for n-dimensional functions; f_i(x_i) : [X_i⁻, X_i⁺] ⊂ IR → IR for all i = 1, ..., n and f(x) : [X⁻, X⁺] ⊂ IRⁿ → IR; all functions are continuous.

I.  Function f(x) = ∑_{i=1}^{n} ± f_i(x_i). Transformation: treat each term f_i(x_i) individually. Approx. error for f(x): ∑_{i=1}^{n} δ_i. Comment: δ_i is the approximation error of f_i(x_i).

II.  Function f(x) = ∏_{i=1}^{n} f_i(x_i). Transformation: ln(f(x)) = ∑_{i=1}^{n} ln(f_i(x_i)). Approx. error: f(x) ( e^{∑_i δ_i} − 1 ). Comment: f_i(x_i) > 0 for all i; δ_i is the approximation error of ln(f_i(x_i)).

III.  Function f(x) = f₁(x)^{f₂(x)}. Transformation: ln(ln(f(x))) = ln(f₂(x)) + ln(ln(f₁(x))). Approx. error: f(x) ( e^{e^{(δ₁+δ₂)}} − 1 ). Comment: f₁(x) > 1, f₂(x) > 0; δ₁ is the approximation error of ln(ln(f₁(x))) and δ₂ is the approximation error of ln(f₂(x)).

IV.  Function f(x) = f₂( f₁(x) ). Transformation: approximate f₂(u) and f₁(x) separately. Approx. error: δ₂ + γ(δ₁), where γ(δ₁) := max_{x ∈ D₁, x−δ₁ ≤ y ≤ x+δ₁} | f₂(x) − f₂(y) |. Comment: δ₂ is the approximation error of f₂(u) and δ₁ is the approximation error of f₁(x).

The nine test functions are summarized in Table 2. The columns X⁻ and X⁺ define the lower and upper bounds, respectively, on the two decision variables x₁ and x₂. The functions are plotted in Figure 2. Table 3 summarizes the transformations applied to functions 1 through 7 of Table 2. The column Type indicates which type of transformation, as defined in Table 1, has been applied. For all computations, we choose δ₁ and δ₂ to be equal. For type I transformations, this leads to δ₁ = δ₂ = δ/2 (cf. Table 1, column

Fig. 2: Plots of test functions 1-9 of Table 2 (panels (a)-(i))

Table 2: Two-dimensional functions tested.

#  f(x)                 X⁻         X⁺         Comment
1  x₁² − x₂²            [.5, .5]   [7.5, .5]  D.C. function [14]
2  x₁² + x₂²            [.5, .5]   [7.5, .5]  convex function
3  x₁ x₂                [., .]     [8., 4.]
4  x₁ exp(−x₁ − x₂)     [.5, .5]   [., .]     maximum function value: .4
5  x₁ sin(x₂)           [., .5]    [4., .]    concave function on domain
6  sin(x₁)/(x₁ x₂)      [., .]     [., .]
7  x₁ sin(x₁) sin(x₂)   [.5, .5]   [., .]
8  (x₁ − x₂)²           [., .]     [., .]
9  exp(−(x₁ − x₂)²)     [., .]     [., .]     steep peak at x₁ = x₂

approx. error). The individual approximation errors for type II transformations are

    δ₁ = δ₂ := (1/2) ln( δ/m + 1 ),   with   m := max_{x ∈ [X⁻, X⁺]} f(x).

If the exact value of m is not available, then we use an overestimator m⁺ for m, i.e., m⁺ ≥ m. The values for m⁺ and/or m are given in column Comment of Table 3. For functions 8 and 9 of Table 2, we apply the substitution rule, i.e., case IV of Table 1. The resulting one-dimensional function f₂(u) : D₁ → IR is stated in Table 4 along with the two-dimensional, nested function f₁(x₁, x₂); the domain D₁ of f₂ is stated there as well. The choices of the approximation errors δ₂ and δ₁ for f₂ and f₁, respectively, are stated in the last two columns of the table. The two-dimensional function f₁(x₁, x₂) = x₁ − x₂ can be approximated by applying a type I transformation, choosing an individual approximation error of δ₁/2 for each term, for instance. In order to compute δ₁ for function 9, we have used the maximal derivative of e^{−u²} in order to overestimate γ(δ₁), see (18).
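The overestimation (18) of γ(δ₁) by the maximal derivative can be checked numerically. The sketch below uses f₂(u) = exp(−u²) as for function 9 and estimates both sides on a sample grid (grid size and domain are illustrative choices, not the paper's):

```python
import math

f2 = lambda u: math.exp(-u * u)
# max |f2'(u)| = sqrt(2/e), attained at u = +-1/sqrt(2)
f2_prime_norm = math.sqrt(2.0 / math.e)

def gamma(delta1, lo=-4.0, hi=4.0, n=4000):
    """Grid estimate of gamma(delta1) = max |f2(u) - f2(y)| over |y - u| <= delta1."""
    worst = 0.0
    for k in range(n + 1):
        u = lo + (hi - lo) * k / n
        for y in (u - delta1, u + delta1):
            worst = max(worst, abs(f2(u) - f2(y)))
    return worst

delta1 = 0.05
print(gamma(delta1) <= f2_prime_norm * delta1)  # bound (18) holds: True
```

By the mean value theorem the secant slope never exceeds the maximal derivative, so the grid estimate of γ(δ₁) is guaranteed to stay below ‖f₂′‖ δ₁, matching (18).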

Table 3: Transformations to univariate functions for functions 1 to 7 of Table 2.

#  f₁(x₁)                 f₂(x₂)         Type  Comment
1  x₁²                    −x₂²           I
2  x₁²                    x₂²            I
3  ln(x₁)                 ln(x₂)         II    m⁺ = m
4  ln(x₁) − x₁            −x₂            II    m⁺ = .4
5  ln(x₁)                 ln(sin(x₂))    II    m⁺ = m
6  ln(sin(x₁)) − ln(x₁)   −ln(x₂)        II    m⁺ = .7
7  ln(x₁) + ln(sin(x₁))   ln(sin(x₂))    II    m⁺

For our computations via Algorithm 2.1, we use the maximal-deviation point in each triangle for the sub-division rule, as described in Section 2.1. Empirically, we observed that a discretization of the shift variables into the values −δ/2, −δ/4, 0, δ/4, δ/2 is a good trade-off between computational time and the number of triangles computed.

Table 4: Substitutions for functions 8 and 9 of Table 2.

#  f₂(u)      D₁        f₁(x₁, x₂)  δ₂     δ₁
8  u²         [−4, 4]   x₁ − x₂     δ/2    δ / ( 4 (4 + δ) )
9  exp(−u²)   [−4, 4]   x₁ − x₂     δ/2    δ √(2e) / 4

The computational results for functions 1 through 9 of Table 2 are summarized in Table 5. For each function f(x), we choose five consecutive values for the approximation error δ from a fixed set of nine candidate tolerances, dependent on the scaling of the function. The results for the 2-D approach are computed by Algorithm 2.1. The column T states the number of triangles used. For

the 1-D approach, we use Algorithm 4.1 in [11]. The approximation error δ_i is applied to both functions f₁(x₁) and f₂(x₂), except for functions 8 and 9. B₁ and B₂ are the computed numbers of breakpoints for functions f₁ and f₂, respectively. Column R reports the number of rectangles resulting from the obtained breakpoint systems; again, functions 8 and 9 are different: there, we report the number of rectangular prisms leading to feasible values for x₁, x₂ and u. For both the 2-D and the 1-D approach, column dev. summarizes the maximal deviation of the obtained piecewise linear, continuous function over the triangulation from the approximated function f(x). These values have been obtained by solving a series of global optimization problems after the approximations had been computed (these computational times are not reported). The columns CPU (sec.) provide the computational times in seconds.

From the numerical results presented in Table 5, we derive two main conclusions:

(1) At first glance, the advantage of applying two-dimensional approximation schemes seems not as striking as expected, because separate one-dimensional piecewise linear approximations seem to require fewer breakpoints (particularly for functions which separate well, e.g., functions 1 and 2). However, whether this is really an advantage depends on the behavior of the MILP solver when both the triangles and the one-dimensional breakpoint systems are implemented.

(2) A limitation of one-dimensional separable approaches is the numerical accuracy required. For instance, the numerical errors when using logarithmic separation approaches involve the function values themselves. This may require very small approximation errors.

Triangulations calculated by Algorithm 2.1 are shown in Figure 3 for different values of δ, before the final refinement to ensure continuity has been applied.

Table 5: Computational results for the triangulations (2-D) and the one-dimensional transformations (1-D): for each function # and tolerance δ, the number of triangles T, the maximal deviation dev., and the CPU time (sec.) of the 2-D approach, and the individual error δ_i, the breakpoint counts B₁ and B₂, the rectangle count R, dev., and the CPU time (sec.) of the 1-D approach.

Fig. 3: Triangulations computed by Algorithm 2.1 for (a) function 1: x₁² − x₂²; (b) function 2: x₁² + x₂²; (c) function 3: x₁ x₂; (d) function 4: x₁ exp(−x₁ − x₂); (e) function 5: x₁ sin(x₂); (f) function 6: sin(x₁)/(x₁ x₂); (g) function 7: x₁ sin(x₁) sin(x₂); (h) function 8: (x₁ − x₂)²; (i) function 9: exp(−(x₁ − x₂)²), each for one of the tested values of δ

5 Conclusions

For bivariate nonlinear functions, we automatically generate triangulations for continuous piecewise linear approximations as well as over- and underestimators satisfying a specified δ-accuracy. The methods we have developed require the solution of nonconvex mathematical programming problems to global optimality. We allow the computed interpolation, associated with the triangulation, to deviate at the vertices of the triangles through shift variables, in an effort to reduce the number of required triangles.

We presented four different dimension-reduction techniques that allow us to apply approximation approaches for lower-dimensional functions. The computational results for the one-dimensional approaches applied to two-dimensional problems are quite promising in that the piecewise linear approximations are computed fast, requiring very few support areas.

There are several promising directions for future research. We have mentioned two open problems in the paper. In addition, when using the proposed dimension-reduction transformations, we face the problem of choosing the individual approximation errors δ_i. For our computations, we have chosen them to be equal. An optimal selection of the δ_i, leading to a piecewise linear function requiring the least number of breakpoints for a given accuracy δ, is an interesting problem in this context.

References

1. Misener, R., Floudas, C.A.: Piecewise-linear approximations of multidimensional functions. Journal of Optimization Theory and Applications 145, 120–147 (2010)

2. Geißler, B., Martin, A., Morsi, A., Schewe, L.: Using piecewise linear functions for solving MINLPs. In: J. Lee, S. Leyffer (eds.) Mixed Integer Nonlinear Programming, The IMA Volumes in Mathematics and its Applications, vol. 154. Springer (2012)
3. Timpe, C., Kallrath, J.: Optimal planning in large multi-site production networks. European Journal of Operational Research 126(2), 422–435 (2000)
4. Kallrath, J.: Solving planning and design problems in the process industry using mixed integer and global optimization. Annals of Operations Research 140, 339–373 (2005)
5. Zheng, Q.P., Rebennack, S., Iliadis, N.A., Pardalos, P.M.: Optimization models in the natural gas industry. In: S. Rebennack, P.M. Pardalos, M.V. Pereira, N.A. Iliadis (eds.) Handbook of Power Systems I, chap. 6. Springer (2010)
6. Frank, S., Steponavice, I., Rebennack, S.: Optimal power flow: a bibliographic survey I. Energy Systems 3(3), 221–258 (2012)
7. Frank, S., Steponavice, I., Rebennack, S.: Optimal power flow: a bibliographic survey II. Energy Systems 3(3), 259–289 (2012)
8. Geißler, B.: Towards globally optimal solutions for MINLPs by discretization techniques with applications in gas network optimization. Dissertation, Universität Erlangen-Nürnberg (2011)
9. Linderoth, J.: A simplicial branch-and-bound algorithm for solving quadratically constrained quadratic programs. Mathematical Programming Ser. B 103, 251–282 (2005)
10. D'Ambrosio, C., Lodi, A., Martello, S.: Piecewise linear approximation of functions of two variables in MILP models. Operations Research Letters 38, 39–46 (2010)
11. Rebennack, S., Kallrath, J.: Continuous piecewise linear delta-approximations for univariate functions: computing minimal breakpoint systems (2014). Submitted to Journal of Optimization Theory and Applications
12. Kallrath, J., Rebennack, S.: Computing area-tight piecewise linear overestimators, underestimators and tubes for univariate functions. In: S. Butenko, C. Floudas, T. Rassias (eds.) Optimization in Science and Engineering. Springer (2014)
13. Vielma, J.P., Nemhauser, G.: Modeling disjunctive constraints with a logarithmic number of binary variables and constraints. Mathematical Programming 128, 49–72 (2011)
14. Horst, R., Pardalos, P.M., Thoai, N.V.: Introduction to Global Optimization, 2nd edn. Kluwer (2000)


More information

Global Minimization via Piecewise-Linear Underestimation

Global Minimization via Piecewise-Linear Underestimation Journal of Global Optimization,, 1 9 (2004) c 2004 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. Global Minimization via Piecewise-Linear Underestimation O. L. MANGASARIAN olvi@cs.wisc.edu

More information

Simplicial Global Optimization

Simplicial Global Optimization Simplicial Global Optimization Julius Žilinskas Vilnius University, Lithuania September, 7 http://web.vu.lt/mii/j.zilinskas Global optimization Find f = min x A f (x) and x A, f (x ) = f, where A R n.

More information

Introduction to Linear Programming. Algorithmic and Geometric Foundations of Optimization

Introduction to Linear Programming. Algorithmic and Geometric Foundations of Optimization Introduction to Linear Programming Algorithmic and Geometric Foundations of Optimization Optimization and Linear Programming Mathematical programming is a class of methods for solving problems which ask

More information

LaGO - A solver for mixed integer nonlinear programming

LaGO - A solver for mixed integer nonlinear programming LaGO - A solver for mixed integer nonlinear programming Ivo Nowak June 1 2005 Problem formulation MINLP: min f(x, y) s.t. g(x, y) 0 h(x, y) = 0 x [x, x] y [y, y] integer MINLP: - n

More information

A SIMULATED ANNEALING ALGORITHM FOR SOME CLASS OF DISCRETE-CONTINUOUS SCHEDULING PROBLEMS. Joanna Józefowska, Marek Mika and Jan Węglarz

A SIMULATED ANNEALING ALGORITHM FOR SOME CLASS OF DISCRETE-CONTINUOUS SCHEDULING PROBLEMS. Joanna Józefowska, Marek Mika and Jan Węglarz A SIMULATED ANNEALING ALGORITHM FOR SOME CLASS OF DISCRETE-CONTINUOUS SCHEDULING PROBLEMS Joanna Józefowska, Marek Mika and Jan Węglarz Poznań University of Technology, Institute of Computing Science,

More information

3 INTEGER LINEAR PROGRAMMING

3 INTEGER LINEAR PROGRAMMING 3 INTEGER LINEAR PROGRAMMING PROBLEM DEFINITION Integer linear programming problem (ILP) of the decision variables x 1,..,x n : (ILP) subject to minimize c x j j n j= 1 a ij x j x j 0 x j integer n j=

More information

Approximate Linear Programming for Average-Cost Dynamic Programming

Approximate Linear Programming for Average-Cost Dynamic Programming Approximate Linear Programming for Average-Cost Dynamic Programming Daniela Pucci de Farias IBM Almaden Research Center 65 Harry Road, San Jose, CA 51 pucci@mitedu Benjamin Van Roy Department of Management

More information

A NEW SEQUENTIAL CUTTING PLANE ALGORITHM FOR SOLVING MIXED INTEGER NONLINEAR PROGRAMMING PROBLEMS

A NEW SEQUENTIAL CUTTING PLANE ALGORITHM FOR SOLVING MIXED INTEGER NONLINEAR PROGRAMMING PROBLEMS EVOLUTIONARY METHODS FOR DESIGN, OPTIMIZATION AND CONTROL P. Neittaanmäki, J. Périaux and T. Tuovinen (Eds.) c CIMNE, Barcelona, Spain 2007 A NEW SEQUENTIAL CUTTING PLANE ALGORITHM FOR SOLVING MIXED INTEGER

More information

Data Partitioning. Figure 1-31: Communication Topologies. Regular Partitions

Data Partitioning. Figure 1-31: Communication Topologies. Regular Partitions Data In single-program multiple-data (SPMD) parallel programs, global data is partitioned, with a portion of the data assigned to each processing node. Issues relevant to choosing a partitioning strategy

More information

Branch-and-Cut and GRASP with Hybrid Local Search for the Multi-Level Capacitated Minimum Spanning Tree Problem

Branch-and-Cut and GRASP with Hybrid Local Search for the Multi-Level Capacitated Minimum Spanning Tree Problem Branch-and-Cut and GRASP with Hybrid Local Search for the Multi-Level Capacitated Minimum Spanning Tree Problem Eduardo Uchoa Túlio A.M. Toffolo Mauricio C. de Souza Alexandre X. Martins + Departamento

More information

Applied Lagrange Duality for Constrained Optimization

Applied Lagrange Duality for Constrained Optimization Applied Lagrange Duality for Constrained Optimization Robert M. Freund February 10, 2004 c 2004 Massachusetts Institute of Technology. 1 1 Overview The Practical Importance of Duality Review of Convexity

More information

Embedding Formulations, Complexity and Representability for Unions of Convex Sets

Embedding Formulations, Complexity and Representability for Unions of Convex Sets , Complexity and Representability for Unions of Convex Sets Juan Pablo Vielma Massachusetts Institute of Technology CMO-BIRS Workshop: Modern Techniques in Discrete Optimization: Mathematics, Algorithms

More information

CS 450 Numerical Analysis. Chapter 7: Interpolation

CS 450 Numerical Analysis. Chapter 7: Interpolation Lecture slides based on the textbook Scientific Computing: An Introductory Survey by Michael T. Heath, copyright c 2018 by the Society for Industrial and Applied Mathematics. http://www.siam.org/books/cl80

More information

MATH3016: OPTIMIZATION

MATH3016: OPTIMIZATION MATH3016: OPTIMIZATION Lecturer: Dr Huifu Xu School of Mathematics University of Southampton Highfield SO17 1BJ Southampton Email: h.xu@soton.ac.uk 1 Introduction What is optimization? Optimization is

More information

CS321 Introduction To Numerical Methods

CS321 Introduction To Numerical Methods CS3 Introduction To Numerical Methods Fuhua (Frank) Cheng Department of Computer Science University of Kentucky Lexington KY 456-46 - - Table of Contents Errors and Number Representations 3 Error Types

More information

Financial Optimization ISE 347/447. Lecture 13. Dr. Ted Ralphs

Financial Optimization ISE 347/447. Lecture 13. Dr. Ted Ralphs Financial Optimization ISE 347/447 Lecture 13 Dr. Ted Ralphs ISE 347/447 Lecture 13 1 Reading for This Lecture C&T Chapter 11 ISE 347/447 Lecture 13 2 Integer Linear Optimization An integer linear optimization

More information

control polytope. These points are manipulated by a descent method to compute a candidate global minimizer. The second method is described in Section

control polytope. These points are manipulated by a descent method to compute a candidate global minimizer. The second method is described in Section Some Heuristics and Test Problems for Nonconvex Quadratic Programming over a Simplex Ivo Nowak September 3, 1998 Keywords:global optimization, nonconvex quadratic programming, heuristics, Bezier methods,

More information

Integer Programming ISE 418. Lecture 1. Dr. Ted Ralphs

Integer Programming ISE 418. Lecture 1. Dr. Ted Ralphs Integer Programming ISE 418 Lecture 1 Dr. Ted Ralphs ISE 418 Lecture 1 1 Reading for This Lecture N&W Sections I.1.1-I.1.4 Wolsey Chapter 1 CCZ Chapter 2 ISE 418 Lecture 1 2 Mathematical Optimization Problems

More information

Analysis and optimization methods of graph based meta-models for data flow simulation

Analysis and optimization methods of graph based meta-models for data flow simulation Rochester Institute of Technology RIT Scholar Works Theses Thesis/Dissertation Collections 8-1-2010 Analysis and optimization methods of graph based meta-models for data flow simulation Jeffrey Harrison

More information

A robust optimization based approach to the general solution of mp-milp problems

A robust optimization based approach to the general solution of mp-milp problems 21 st European Symposium on Computer Aided Process Engineering ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) 2011 Elsevier B.V. All rights reserved. A robust optimization based

More information

Triangulation Based Volume Calculation Andrew Y.T Kudowor and George Taylor Department of Geomatics University of Newcastle

Triangulation Based Volume Calculation Andrew Y.T Kudowor and George Taylor Department of Geomatics University of Newcastle Triangulation Based Volume Calculation Andrew Y.T Kudowor and George Taylor Department of Geomatics University of Newcastle ABSTRACT Delaunay triangulation is a commonly used method for generating surface

More information

MINLP applications, part II: Water Network Design and some applications of black-box optimization

MINLP applications, part II: Water Network Design and some applications of black-box optimization MINLP applications, part II: Water Network Design and some applications of black-box optimization Claudia D Ambrosio CNRS & LIX, École Polytechnique dambrosio@lix.polytechnique.fr 5th Porto Meeting on

More information

A Note on the Separation of Subtour Elimination Constraints in Asymmetric Routing Problems

A Note on the Separation of Subtour Elimination Constraints in Asymmetric Routing Problems Gutenberg School of Management and Economics Discussion Paper Series A Note on the Separation of Subtour Elimination Constraints in Asymmetric Routing Problems Michael Drexl March 202 Discussion paper

More information

Principles of Optimization Techniques to Combinatorial Optimization Problems and Decomposition [1]

Principles of Optimization Techniques to Combinatorial Optimization Problems and Decomposition [1] International Journal of scientific research and management (IJSRM) Volume 3 Issue 4 Pages 2582-2588 2015 \ Website: www.ijsrm.in ISSN (e): 2321-3418 Principles of Optimization Techniques to Combinatorial

More information

The Branch-and-Sandwich Algorithm for Mixed-Integer Nonlinear Bilevel Problems

The Branch-and-Sandwich Algorithm for Mixed-Integer Nonlinear Bilevel Problems The Branch-and-Sandwich Algorithm for Mixed-Integer Nonlinear Bilevel Problems Polyxeni-M. Kleniati and Claire S. Adjiman MINLP Workshop June 2, 204, CMU Funding Bodies EPSRC & LEVERHULME TRUST OUTLINE

More information

Introduction to Mathematical Programming IE406. Lecture 20. Dr. Ted Ralphs

Introduction to Mathematical Programming IE406. Lecture 20. Dr. Ted Ralphs Introduction to Mathematical Programming IE406 Lecture 20 Dr. Ted Ralphs IE406 Lecture 20 1 Reading for This Lecture Bertsimas Sections 10.1, 11.4 IE406 Lecture 20 2 Integer Linear Programming An integer

More information

Cutting Planes for Some Nonconvex Combinatorial Optimization Problems

Cutting Planes for Some Nonconvex Combinatorial Optimization Problems Cutting Planes for Some Nonconvex Combinatorial Optimization Problems Ismael Regis de Farias Jr. Department of Industrial Engineering Texas Tech Summary Problem definition Solution strategy Multiple-choice

More information

EXERCISES SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM. 1 Applications and Modelling

EXERCISES SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM. 1 Applications and Modelling SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM EXERCISES Prepared by Natashia Boland 1 and Irina Dumitrescu 2 1 Applications and Modelling 1.1

More information

Optimization of Chemical Processes Using Surrogate Models Based on a Kriging Interpolation

Optimization of Chemical Processes Using Surrogate Models Based on a Kriging Interpolation Krist V. Gernaey, Jakob K. Huusom and Rafiqul Gani (Eds.), 12th International Symposium on Process Systems Engineering and 25th European Symposium on Computer Aided Process Engineering. 31 May 4 June 2015,

More information

Methods and Models for Combinatorial Optimization Exact methods for the Traveling Salesman Problem

Methods and Models for Combinatorial Optimization Exact methods for the Traveling Salesman Problem Methods and Models for Combinatorial Optimization Exact methods for the Traveling Salesman Problem L. De Giovanni M. Di Summa The Traveling Salesman Problem (TSP) is an optimization problem on a directed

More information

A Nonlinear Presolve Algorithm in AIMMS

A Nonlinear Presolve Algorithm in AIMMS A Nonlinear Presolve Algorithm in AIMMS By Marcel Hunting marcel.hunting@aimms.com November 2011 This paper describes the AIMMS presolve algorithm for nonlinear problems. This presolve algorithm uses standard

More information

Linear Programming. them such that they

Linear Programming. them such that they Linear Programming l Another "Sledgehammer" in our toolkit l Many problems fit into the Linear Programming approach l These are optimization tasks where both the constraints and the objective are linear

More information

Generating edge covers of path graphs

Generating edge covers of path graphs Generating edge covers of path graphs J. Raymundo Marcial-Romero, J. A. Hernández, Vianney Muñoz-Jiménez and Héctor A. Montes-Venegas Facultad de Ingeniería, Universidad Autónoma del Estado de México,

More information

Euler s Theorem. Brett Chenoweth. February 26, 2013

Euler s Theorem. Brett Chenoweth. February 26, 2013 Euler s Theorem Brett Chenoweth February 26, 2013 1 Introduction This summer I have spent six weeks of my holidays working on a research project funded by the AMSI. The title of my project was Euler s

More information

REINER HORST. License or copyright restrictions may apply to redistribution; see

REINER HORST. License or copyright restrictions may apply to redistribution; see MATHEMATICS OF COMPUTATION Volume 66, Number 218, April 1997, Pages 691 698 S 0025-5718(97)00809-0 ON GENERALIZED BISECTION OF n SIMPLICES REINER HORST Abstract. A generalized procedure of bisection of

More information

x ji = s i, i N, (1.1)

x ji = s i, i N, (1.1) Dual Ascent Methods. DUAL ASCENT In this chapter we focus on the minimum cost flow problem minimize subject to (i,j) A {j (i,j) A} a ij x ij x ij {j (j,i) A} (MCF) x ji = s i, i N, (.) b ij x ij c ij,

More information

Modeling with Uncertainty Interval Computations Using Fuzzy Sets

Modeling with Uncertainty Interval Computations Using Fuzzy Sets Modeling with Uncertainty Interval Computations Using Fuzzy Sets J. Honda, R. Tankelevich Department of Mathematical and Computer Sciences, Colorado School of Mines, Golden, CO, U.S.A. Abstract A new method

More information

LECTURE NOTES Non-Linear Programming

LECTURE NOTES Non-Linear Programming CEE 6110 David Rosenberg p. 1 Learning Objectives LECTURE NOTES Non-Linear Programming 1. Write out the non-linear model formulation 2. Describe the difficulties of solving a non-linear programming model

More information

Integrating Mixed-Integer Optimisation & Satisfiability Modulo Theories

Integrating Mixed-Integer Optimisation & Satisfiability Modulo Theories Integrating Mixed-Integer Optimisation & Satisfiability Modulo Theories Application to Scheduling Miten Mistry and Ruth Misener Wednesday 11 th January, 2017 Mistry & Misener MIP & SMT Wednesday 11 th

More information

LaGO. Ivo Nowak and Stefan Vigerske. Humboldt-University Berlin, Department of Mathematics

LaGO. Ivo Nowak and Stefan Vigerske. Humboldt-University Berlin, Department of Mathematics LaGO a Branch and Cut framework for nonconvex MINLPs Ivo Nowak and Humboldt-University Berlin, Department of Mathematics EURO XXI, July 5, 2006 21st European Conference on Operational Research, Reykjavik

More information

Unconstrained Optimization Principles of Unconstrained Optimization Search Methods

Unconstrained Optimization Principles of Unconstrained Optimization Search Methods 1 Nonlinear Programming Types of Nonlinear Programs (NLP) Convexity and Convex Programs NLP Solutions Unconstrained Optimization Principles of Unconstrained Optimization Search Methods Constrained Optimization

More information

A Short SVM (Support Vector Machine) Tutorial

A Short SVM (Support Vector Machine) Tutorial A Short SVM (Support Vector Machine) Tutorial j.p.lewis CGIT Lab / IMSC U. Southern California version 0.zz dec 004 This tutorial assumes you are familiar with linear algebra and equality-constrained optimization/lagrange

More information

Simulation. Lecture O1 Optimization: Linear Programming. Saeed Bastani April 2016

Simulation. Lecture O1 Optimization: Linear Programming. Saeed Bastani April 2016 Simulation Lecture O Optimization: Linear Programming Saeed Bastani April 06 Outline of the course Linear Programming ( lecture) Integer Programming ( lecture) Heuristics and Metaheursitics (3 lectures)

More information

Test Problems for Lipschitz Univariate Global Optimization with Multiextremal Constraints

Test Problems for Lipschitz Univariate Global Optimization with Multiextremal Constraints Test Problems for Lipschitz Univariate Global Optimization with Multiextremal Constraints Domenico Famularo DEIS, Università degli Studi della Calabria, Via Pietro Bucci C-C, 8736 Rende CS, ITALY e-mail

More information

Experiments On General Disjunctions

Experiments On General Disjunctions Experiments On General Disjunctions Some Dumb Ideas We Tried That Didn t Work* and Others We Haven t Tried Yet *But that may provide some insight Ted Ralphs, Serdar Yildiz COR@L Lab, Department of Industrial

More information

Algorithms for Integer Programming

Algorithms for Integer Programming Algorithms for Integer Programming Laura Galli November 9, 2016 Unlike linear programming problems, integer programming problems are very difficult to solve. In fact, no efficient general algorithm is

More information

Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic Algorithm and Particle Swarm Optimization

Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic Algorithm and Particle Swarm Optimization 2017 2 nd International Electrical Engineering Conference (IEEC 2017) May. 19 th -20 th, 2017 at IEP Centre, Karachi, Pakistan Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic

More information

Arc-Flow Model for the Two-Dimensional Cutting Stock Problem

Arc-Flow Model for the Two-Dimensional Cutting Stock Problem Arc-Flow Model for the Two-Dimensional Cutting Stock Problem Rita Macedo Cláudio Alves J. M. Valério de Carvalho Centro de Investigação Algoritmi, Universidade do Minho Escola de Engenharia, Universidade

More information

MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS

MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS MATHEMATICS II: COLLECTION OF EXERCISES AND PROBLEMS GRADO EN A.D.E. GRADO EN ECONOMÍA GRADO EN F.Y.C. ACADEMIC YEAR 2011-12 INDEX UNIT 1.- AN INTRODUCCTION TO OPTIMIZATION 2 UNIT 2.- NONLINEAR PROGRAMMING

More information

A Constrained Delaunay Triangle Mesh Method for Three-Dimensional Unstructured Boundary Point Cloud

A Constrained Delaunay Triangle Mesh Method for Three-Dimensional Unstructured Boundary Point Cloud International Journal of Computer Systems (ISSN: 2394-1065), Volume 03 Issue 02, February, 2016 Available at http://www.ijcsonline.com/ A Constrained Delaunay Triangle Mesh Method for Three-Dimensional

More information

A Randomized Algorithm for Minimizing User Disturbance Due to Changes in Cellular Technology

A Randomized Algorithm for Minimizing User Disturbance Due to Changes in Cellular Technology A Randomized Algorithm for Minimizing User Disturbance Due to Changes in Cellular Technology Carlos A. S. OLIVEIRA CAO Lab, Dept. of ISE, University of Florida Gainesville, FL 32611, USA David PAOLINI

More information

Comparing different interpolation methods on two-dimensional test functions

Comparing different interpolation methods on two-dimensional test functions Comparing different interpolation methods on two-dimensional test functions Thomas Mühlenstädt, Sonja Kuhnt May 28, 2009 Keywords: Interpolation, computer experiment, Kriging, Kernel interpolation, Thin

More information

General Method for Exponential-Type Equations. for Eight- and Nine-Point Prismatic Arrays

General Method for Exponential-Type Equations. for Eight- and Nine-Point Prismatic Arrays Applied Mathematical Sciences, Vol. 3, 2009, no. 43, 2143-2156 General Method for Exponential-Type Equations for Eight- and Nine-Point Prismatic Arrays G. L. Silver Los Alamos National Laboratory* P.O.

More information

The AIMMS Outer Approximation Algorithm for MINLP

The AIMMS Outer Approximation Algorithm for MINLP The AIMMS Outer Approximation Algorithm for MINLP (using GMP functionality) By Marcel Hunting Paragon Decision Technology BV An AIMMS White Paper November, 2011 Abstract This document describes how to

More information

Bi-objective Network Flow Optimization Problem

Bi-objective Network Flow Optimization Problem Bi-objective Network Flow Optimization Problem Pedro Miguel Guedelha Dias Department of Engineering and Management, Instituto Superior Técnico June 2016 Abstract Network flow problems, specifically minimum

More information

Standard dimension optimization of steel frames

Standard dimension optimization of steel frames Computer Aided Optimum Design in Engineering IX 157 Standard dimension optimization of steel frames U. Klanšek & S. Kravanja University of Maribor, Faculty of Civil Engineering, Slovenia Abstract This

More information

A Comparison of Mixed-Integer Programming Models for Non-Convex Piecewise Linear Cost Minimization Problems

A Comparison of Mixed-Integer Programming Models for Non-Convex Piecewise Linear Cost Minimization Problems A Comparison of Mixed-Integer Programming Models for Non-Convex Piecewise Linear Cost Minimization Problems Keely L. Croxton Fisher College of Business The Ohio State University Bernard Gendron Département

More information

Normals of subdivision surfaces and their control polyhedra

Normals of subdivision surfaces and their control polyhedra Computer Aided Geometric Design 24 (27 112 116 www.elsevier.com/locate/cagd Normals of subdivision surfaces and their control polyhedra I. Ginkel a,j.peters b,,g.umlauf a a University of Kaiserslautern,

More information

Applications of Operational Calculus: Rectangular Prisms and the Estimation. of a Missing Datum

Applications of Operational Calculus: Rectangular Prisms and the Estimation. of a Missing Datum Applied Mathematical Sciences, Vol. 4, 2010, no. 13, 603-610 Applications of Operational Calculus: Rectangular Prisms and the Estimation of a Missing Datum G. L. Silver Los Alamos National Laboratory*

More information

Efficient Algorithms for Robust Procurements

Efficient Algorithms for Robust Procurements Efficient Algorithms for Robust Procurements Shawn Levie January 8, 2013 Studentnr. 344412 E-mail: shawnlevie@gmail.com Supervisors: Dr. A. F. Gabor and Dr. Y. Zhang Erasmus School of Economics 1 Abstract

More information

1 Linear programming relaxation

1 Linear programming relaxation Cornell University, Fall 2010 CS 6820: Algorithms Lecture notes: Primal-dual min-cost bipartite matching August 27 30 1 Linear programming relaxation Recall that in the bipartite minimum-cost perfect matching

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

Graph Coloring via Constraint Programming-based Column Generation

Graph Coloring via Constraint Programming-based Column Generation Graph Coloring via Constraint Programming-based Column Generation Stefano Gualandi Federico Malucelli Dipartimento di Elettronica e Informatica, Politecnico di Milano Viale Ponzio 24/A, 20133, Milan, Italy

More information

Linear Programming. Meaning of Linear Programming. Basic Terminology

Linear Programming. Meaning of Linear Programming. Basic Terminology Linear Programming Linear Programming (LP) is a versatile technique for assigning a fixed amount of resources among competing factors, in such a way that some objective is optimized and other defined conditions

More information

Optimization of fuzzy multi-company workers assignment problem with penalty using genetic algorithm

Optimization of fuzzy multi-company workers assignment problem with penalty using genetic algorithm Optimization of fuzzy multi-company workers assignment problem with penalty using genetic algorithm N. Shahsavari Pour Department of Industrial Engineering, Science and Research Branch, Islamic Azad University,

More information

PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction

PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING KELLER VANDEBOGERT AND CHARLES LANNING 1. Introduction Interior point methods are, put simply, a technique of optimization where, given a problem

More information

Quadratic and cubic b-splines by generalizing higher-order voronoi diagrams

Quadratic and cubic b-splines by generalizing higher-order voronoi diagrams Quadratic and cubic b-splines by generalizing higher-order voronoi diagrams Yuanxin Liu and Jack Snoeyink Joshua Levine April 18, 2007 Computer Science and Engineering, The Ohio State University 1 / 24

More information

Surrogate Gradient Algorithm for Lagrangian Relaxation 1,2

Surrogate Gradient Algorithm for Lagrangian Relaxation 1,2 Surrogate Gradient Algorithm for Lagrangian Relaxation 1,2 X. Zhao 3, P. B. Luh 4, and J. Wang 5 Communicated by W.B. Gong and D. D. Yao 1 This paper is dedicated to Professor Yu-Chi Ho for his 65th birthday.

More information

Level Set Extraction from Gridded 2D and 3D Data

Level Set Extraction from Gridded 2D and 3D Data Level Set Extraction from Gridded 2D and 3D Data David Eberly, Geometric Tools, Redmond WA 98052 https://www.geometrictools.com/ This work is licensed under the Creative Commons Attribution 4.0 International

More information

Integer Programming Theory

Integer Programming Theory Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x

More information

A Two-Phase Global Optimization Algorithm for Black-Box Functions

A Two-Phase Global Optimization Algorithm for Black-Box Functions Baltic J. Modern Computing, Vol. 3 (2015), No. 3, pp. 214 224 A Two-Phase Global Optimization Algorithm for Black-Box Functions Gražina GIMBUTIENĖ, Antanas ŽILINSKAS Institute of Mathematics and Informatics,

More information

Lecture VIII. Global Approximation Methods: I

Lecture VIII. Global Approximation Methods: I Lecture VIII Global Approximation Methods: I Gianluca Violante New York University Quantitative Macroeconomics G. Violante, Global Methods p. 1 /29 Global function approximation Global methods: function

More information

Metaheuristic Optimization with Evolver, Genocop and OptQuest

Metaheuristic Optimization with Evolver, Genocop and OptQuest Metaheuristic Optimization with Evolver, Genocop and OptQuest MANUEL LAGUNA Graduate School of Business Administration University of Colorado, Boulder, CO 80309-0419 Manuel.Laguna@Colorado.EDU Last revision:

More information

Computational study of the step size parameter of the subgradient optimization method

Computational study of the step size parameter of the subgradient optimization method 1 Computational study of the step size parameter of the subgradient optimization method Mengjie Han 1 Abstract The subgradient optimization method is a simple and flexible linear programming iterative

More information

Data Mining Practical Machine Learning Tools and Techniques. Slides for Chapter 6 of Data Mining by I. H. Witten and E. Frank

Data Mining Practical Machine Learning Tools and Techniques. Slides for Chapter 6 of Data Mining by I. H. Witten and E. Frank Data Mining Practical Machine Learning Tools and Techniques Slides for Chapter 6 of Data Mining by I. H. Witten and E. Frank Implementation: Real machine learning schemes Decision trees Classification

More information

Efficient Degree Elevation and Knot Insertion for B-spline Curves using Derivatives

Efficient Degree Elevation and Knot Insertion for B-spline Curves using Derivatives Efficient Degree Elevation and Knot Insertion for B-spline Curves using Derivatives Qi-Xing Huang a Shi-Min Hu a,1 Ralph R Martin b a Department of Computer Science and Technology, Tsinghua University,

More information
