Approximating the Nondominated Set of an MOLP by Approximately Solving its Dual Problem


Lizhen Shao, Department of Engineering Science, The University of Auckland, New Zealand

Matthias Ehrgott, Department of Engineering Science, The University of Auckland, New Zealand, and Laboratoire d'Informatique de Nantes Atlantique, Université de Nantes, France (matthias.ehrgott@univ-nantes.fr)

August 2007

Abstract

The geometric duality theory of Heyde and Löhne (2006) defines a dual to a multiple objective linear programme (MOLP). In objective space, the primal problem can be solved by Benson's outer approximation method (Benson, 1998a,b), while the dual problem can be solved by a dual variant of Benson's algorithm (Ehrgott et al., 2007). Duality theory then assures that it is possible to find the nondominated set of the primal MOLP by solving its dual. In this paper, we propose an algorithm to solve the dual MOLP approximately, but within a specified tolerance. This approximate solution set can be used to calculate an approximation of the nondominated set of the primal. We show that this set is an ε-nondominated set of the original primal MOLP and provide numerical evidence that this approach can be faster than solving the primal MOLP approximately.

1 Introduction

The goal of multiple objective optimization is to simultaneously minimize p noncomparable objectives. The objectives are usually conflicting in nature, so a feasible solution optimizing all objectives simultaneously does not exist. Therefore, the goal of multiobjective optimization is to obtain the nondominated set. For a multiple objective programme (MOP), a nondominated point in objective space corresponds to an efficient solution in decision space, and an efficient solution is defined as a solution for which an improvement in one objective will always lead to a deterioration in at least one of the other objectives. The set of all nondominated points forms the nondominated set in objective space. It conveys the trade-off information to a decision maker (DM), whom we assume to prefer less to more in each objective.

In this paper, we are concerned with multiobjective linear programming problems. Researchers have developed a variety of methods for generating the efficient set or the nondominated set, such as multiobjective simplex methods, interior point methods, and objective space methods. Multiobjective simplex methods and interior point methods work in variable space to find the efficient set; see the references in Ehrgott and Wiecek (2005). Since the number of objectives of an MOLP is often much smaller than the number of variables, and typically many efficient solutions in decision space are mapped to a single nondominated point in objective space, Benson (1998a) argues that generating the nondominated set should require less computation than generating the efficient set. Moreover, it is reasonable to assume that a decision maker will choose a solution based on objective values rather than variable values. Therefore, finding the nondominated set in objective space instead of the efficient set in decision space is more important for the DM.

Benson has proposed an outer approximation method (Benson, 1998a) to find the extended feasible set in objective space of an MOLP. Furthermore, for an MOLP, geometric duality theory (Heyde and Löhne, 2006) specifies a dual MOLP and establishes that the extended primal and dual objective sets are related to each other. Thus, if the extended objective set of the dual MOLP is known, then the extended objective set of the primal MOLP can be obtained, and vice versa. Based on the geometric duality theory, Ehrgott et al. (2007) develop a dual variant of Benson's algorithm to obtain the feasible set in objective space of the dual MOLP.

Although it is theoretically possible to identify the complete nondominated set with the methods mentioned above, finding an exact description of this set often turns out to be practically impossible or at least computationally too expensive (see the examples in Shao and Ehrgott (2006)). Therefore, many researchers focus on approximating the nondominated set; see Ruzika and Wiecek (2005) for a survey. In the literature, the concept of ε-nondominated points has been suggested as a means to account for modeling limitations or computational inaccuracies. In Shao and Ehrgott (2006), we have proposed an approximation version of Benson's algorithm to sandwich the extended feasible set of an MOLP between an outer approximation and an inner approximation. The nondominated set of the inner approximation is proved to be a set of ε-nondominated points. In this paper, we propose to solve the dual MOLP approximately and then to calculate a corresponding polyhedral set in objective space of the primal MOLP using a coupling function. The nondominated subset of this polyhedral set can be proved to be a set of ε-nondominated points of the original primal MOLP. Finally, we apply this approximate method to solve the beam intensity optimization problem of radiotherapy treatment planning. Three clinical cases were used and the results are compared with those obtained by the approximation version of Benson's algorithm, which directly approximates the truncated extended feasible set of the primal MOLP, an algorithm developed in Shao and Ehrgott (2006).

2 Preliminaries

In this paper we write y ≦ y′ if y_k ≤ y′_k for all k, y ≤ y′ if y ≦ y′ but y ≠ y′, and y < y′ if y_k < y′_k for all k = 1, ..., p, for y, y′ ∈ R^p. The k-th unit vector in R^p is denoted by e^k and a vector of all ones is denoted by e. Given a mapping f : R^n → R^p and a subset X ⊆ R^n we write f(X) := {f(x) : x ∈ X}. Let A ⊆ R^p. We denote the interior and relative interior of A by int A and ri A.

Let C ⊆ R^p be a closed convex cone. An element y ∈ A is called C-minimal if ({y} − C\{0}) ∩ A = ∅ and C-maximal if ({y} + C\{0}) ∩ A = ∅. A point y ∈ A is called weakly C-minimal (weakly C-maximal) if ({y} − ri C) ∩ A = ∅ (({y} + ri C) ∩ A = ∅). We set wmin_C A := {y ∈ A : ({y} − ri C) ∩ A = ∅} and wmax_C A := wmin_{−C} A. In this paper we consider two special ordering cones, namely C = R^p_≥ = {x ∈ R^p : x_k ≥ 0, k = 1, ..., p} and C = K := R_≥ e^p = {y ∈ R^p : y_1 = ... = y_{p−1} = 0, y_p ≥ 0}. For the choice C = R^p_≥ the set of weakly R^p_≥-minimal elements of A (also called the set of weakly nondominated points of A) is given by wmin_{R^p_≥} A := {y ∈ A : ({y} − int R^p_≥) ∩ A = ∅}. In the case of C = K the set of K-maximal elements of A is given by max_K A := {y ∈ A : ({y} + K\{0}) ∩ A = ∅}. Note that ri K = K\{0}, so that weakly K-maximal and K-maximal elements of A coincide.

Let us recall some facts concerning the facial structure of polyhedral sets (Webster, 1994). Let A ⊆ R^p be a convex set. A convex subset F ⊆ A is called a face of A if for all y, y′ ∈ A and α ∈ (0,1) such that αy + (1−α)y′ ∈ F it holds that y, y′ ∈ F. A face F of A is called proper if F ≠ A. A point y ∈ A is called an extreme point of A if {y} is a face of A. The extreme points are also called vertices. The set of all vertices of a polyhedron A is denoted by vert A. A polyhedral convex set A is defined by {y ∈ R^p : By ≥ β}, where B ∈ R^{m×p} and β ∈ R^m. A polyhedral set A has a finite number of faces. A subset F of A is a face if and only if there are λ ∈ R^p and γ ∈ R such that A ⊆ {y ∈ R^p : λ^T y ≥ γ} and F = {y ∈ R^p : λ^T y = γ} ∩ A. Moreover, F is a proper face if and only if H := {y ∈ R^p : λ^T y = γ} is a supporting hyperplane to A with F = A ∩ H. We call a hyperplane H = {y ∈ R^p : λ^T y = γ} supporting if λ^T y ≥ γ for all y ∈ A and there is some y ∈ A such that λ^T y = γ. The proper (r−1)-dimensional faces of an r-dimensional polyhedral set A are called facets of A.

A recession direction of A is a vector d ∈ R^p such that y + αd ∈ A for some y ∈ A and all α ≥ 0. The recession cone (or asymptotic cone) A_∞ of A is the set of all recession directions, A_∞ := {d ∈ R^p : y + αd ∈ A for some y ∈ A and all α ≥ 0}. A recession direction d is called extreme if there are no recession directions d^1, d^2 with d^1 ≠ αd^2 for all α > 0 such that d = ½(d^1 + d^2).
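As a small aside, the notion of a vertex used throughout the paper is easy to test numerically. The following minimal Python sketch relies on the standard characterization of a vertex as a feasible point at which the active rows of B have rank p (a fact not spelled out above); the function name and the data are hypothetical and serve only to make the snippet self-contained.

```python
import numpy as np

def is_vertex(y, B, beta, tol=1e-9):
    """Vertex test for the polyhedron {y : By >= beta}: y must be feasible and the
    constraint rows active at y must have rank p (standard characterization)."""
    if np.any(B @ y < beta - tol):
        return False                                 # not even feasible
    active = B[np.abs(B @ y - beta) <= tol]
    return active.size > 0 and np.linalg.matrix_rank(active) == B.shape[1]

# Hypothetical example: the unit square {0 <= y_1 <= 1, 0 <= y_2 <= 1}.
B = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
beta = np.array([0.0, 0.0, -1.0, -1.0])
print(is_vertex(np.array([0.0, 0.0]), B, beta))  # True: a corner of the square
print(is_vertex(np.array([0.5, 0.0]), B, beta))  # False: relative interior of a facet
```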

A polyhedral convex set A can be represented both by a finite set of inequalities and by the set of all extreme points and extreme directions of A (Rockafellar, 1970). Let E = {y^1, ..., y^r, d^1, ..., d^t} be the set of all extreme points and extreme directions of A; then

A = {y ∈ R^p : y = Σ_{i=1}^{r} α_i y^i + Σ_{j=1}^{t} ν_j d^j with α_i ≥ 0, ν_j ≥ 0, and Σ_{i=1}^{r} α_i = 1}.

3 MOLP Duality Theory

In this paper we consider multiple objective linear programming problems of the form

min{Px : x ∈ X}. (1)

We assume that X in (1) is a nonempty feasible set in decision space R^n defined by X = {x ∈ R^n : Ax ≥ b}, where A ∈ R^{m×n}, b ∈ R^m and P ∈ R^{p×n}. The feasible set Y in objective space R^p is defined by

Y = {Px : x ∈ X}. (2)

It is well known that the image Y of a nonempty polyhedron X under a linear map P is also a nonempty polyhedron of dimension dim Y ≤ p.

Definition 3.1 A feasible solution x̂ ∈ X is an efficient solution of problem (1) if there exists no x ∈ X such that Px ≤ Px̂. The set of all efficient solutions of problem (1) will be denoted by X_E and called the efficient set in decision space. Correspondingly, ŷ = Px̂ is called a nondominated point and Y_N = {Px : x ∈ X_E} is the nondominated set in objective space of problem (1).

Definition 3.2 A feasible solution x̂ ∈ X is called weakly efficient if there is no x ∈ X such that Px < Px̂. The set of all weakly efficient solutions of problem (1) will be denoted by X_WE and called the weakly efficient set in decision space. Correspondingly, the point ŷ = Px̂ is called a weakly nondominated point and Y_WN = {Px : x ∈ X_WE} is the weakly nondominated set in objective space of problem (1).

The primal and dual pair of this MOLP formulated in Heyde and Löhne (2006) are

(P) wmin_{R^p_≥} P(X), X := {x ∈ R^n : Ax ≥ b},
(D) max_K D(U), U := {(u, λ) ∈ R^m × R^p : (u, λ) ≥ 0, A^T u = P^T λ, e^T λ = 1},

where K := {y ∈ R^p : y_1 = ... = y_{p−1} = 0, y_p ≥ 0} as defined before and D : R^{m+p} → R^p is given by

D(u, λ) := (λ_1, ..., λ_{p−1}, b^T u)^T.

The primal problem (P) consists in finding the weakly nondominated points of P(X); the dual problem consists in finding the K-maximal elements of D(U). We use the extended polyhedral image sets P := P(X) + R^p_≥ of problem (P) and D := D(U) − K of problem (D). It is known that the R^p_≥-minimal (nondominated) points of P and P(X), as well as the K-maximal elements of D and D(U), coincide. Geometric duality theory (Heyde and Löhne, 2006) states that there is an inclusion reversing one-to-one map Ψ between the set of all proper K-maximal faces of D and the set of all proper R^p_≥-minimal faces of P.
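To make the dual data concrete, the following minimal Python sketch checks whether a pair (u, λ) lies in the dual feasible set U and evaluates D(u, λ). The matrices P, A and the vector b are hypothetical, chosen only so that the snippet runs; they are not data from the paper.

```python
import numpy as np

# Hypothetical data for min{Px : Ax >= b} with p = 2 objectives and n = 2 variables.
P = np.array([[1.0, 0.0],
              [0.0, 1.0]])
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
b = np.array([2.0, 2.0])

def is_dual_feasible(u, lam, tol=1e-9):
    """(u, lam) in U  <=>  u >= 0, lam >= 0, A^T u = P^T lam and e^T lam = 1."""
    return (np.all(u >= -tol) and np.all(lam >= -tol)
            and np.allclose(A.T @ u, P.T @ lam, atol=tol)
            and abs(lam.sum() - 1.0) <= tol)

def dual_objective(u, lam):
    """D(u, lam) = (lam_1, ..., lam_{p-1}, b^T u)."""
    return np.append(lam[:-1], b @ u)

# Equal weights lam = (1/2, 1/2); u is chosen to satisfy A^T u = P^T lam.
lam = np.array([0.5, 0.5])
u = np.linalg.solve(A.T, P.T @ lam)
print(is_dual_feasible(u, lam), dual_objective(u, lam))
```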

The map Ψ is based on the two set-valued maps

H : R^p ⇉ R^p, H(v) := {y ∈ R^p : ϕ(y, v) = 0},
H* : R^p ⇉ R^p, H*(y) := {v ∈ R^p : ϕ(y, v) = 0},

where ϕ(y, v) = Σ_{i=1}^{p−1} y_i v_i + y_p (1 − Σ_{i=1}^{p−1} v_i) − v_p is the coupling function. Let

λ(v) := (v_1, ..., v_{p−1}, 1 − Σ_{i=1}^{p−1} v_i)^T,   λ*(y) := (y_1 − y_p, ..., y_{p−1} − y_p, −1)^T;

then H(v) = {y ∈ R^p : λ(v)^T y = v_p} and H*(y) = {v ∈ R^p : λ*(y)^T v = −y_p}.

Theorem 3.3 (Heyde and Löhne (2006)) Let F* be a proper K-maximal face of D; then

Ψ(F*) := ∩_{v ∈ F*} H(v) ∩ P

is a proper weakly nondominated face of P. Ψ is an inclusion reversing one-to-one map between the set of all proper K-maximal faces of D and the set of all proper weakly nondominated faces of P, and the inverse map is given by

Ψ^{−1}(F) = ∩_{y ∈ F} H*(y) ∩ D.

Moreover, for every proper K-maximal face F* of D it holds that dim F* + dim Ψ(F*) = p − 1.

Geometric duality theory gives us the following results.

Corollary 3.4 (Heyde and Löhne (2006))
1. If v is a K-maximal vertex of D, then H(v) ∩ P is a weakly nondominated (p−1)-dimensional facet of P, and vice versa.
2. If F is a weakly nondominated (p−1)-dimensional facet of P, there is some uniquely defined point v ∈ R^p such that F = H(v) ∩ P.
3. If y is a weakly nondominated vertex of P, then H*(y) ∩ D is a K-maximal (p−1)-dimensional facet of D, and vice versa.
4. If F* is a K-maximal (p−1)-dimensional facet of D, there is some uniquely defined point y ∈ R^p such that F* = H*(y) ∩ D.

The following pair of dual linear programming problems is useful for the proof of geometric duality and also plays a key role in the algorithm for solving the dual problem introduced in the next section:

(P_1(v)) min_{x ∈ X} λ(v)^T Px,   X := {x ∈ R^n : Ax ≥ b},
(D_1(v)) max_{u ∈ T(v)} b^T u,   T(v) := {u ∈ R^m : u ≥ 0, A^T u = P^T λ(v)}.
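The coupling function and the two weight maps translate directly into code. The sketch below (function names are illustrative only) checks numerically that ϕ(y, v) = λ(v)^T y − v_p = λ*(y)^T v + y_p, so that the single equation ϕ(y, v) = 0 describes H(v) when read as an equation in y and H*(y) when read as an equation in v.

```python
import numpy as np

def phi(y, v):
    """Coupling function phi(y, v) = sum_{i<p} y_i v_i + y_p (1 - sum_{i<p} v_i) - v_p."""
    return y[:-1] @ v[:-1] + y[-1] * (1.0 - v[:-1].sum()) - v[-1]

def lam(v):
    """lambda(v) = (v_1, ..., v_{p-1}, 1 - sum_{i<p} v_i)."""
    return np.append(v[:-1], 1.0 - v[:-1].sum())

def lam_star(y):
    """lambda*(y) = (y_1 - y_p, ..., y_{p-1} - y_p, -1)."""
    return np.append(y[:-1] - y[-1], -1.0)

# For random y, v in R^p the three expressions agree, so phi(y, v) = 0 is the
# hyperplane H(v) in y-space and the hyperplane H*(y) in v-space.
rng = np.random.default_rng(0)
y, v = rng.random(3), rng.random(3)
assert np.isclose(phi(y, v), lam(v) @ y - v[-1])
assert np.isclose(phi(y, v), lam_star(y) @ v + y[-1])
print(phi(y, v))
```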

4 Solving the Primal and Dual MOLP

Assuming X is bounded, Benson proposes an outer approximation algorithm to solve MOLP (1) in objective space (Benson, 1998a,b). Benson defines

Y′ = {y ∈ R^p : Px ≤ y ≤ ŷ for some x ∈ X}, (3)

where ŷ ∈ R^p is chosen to satisfy ŷ > y^AI. The vector y^AI ∈ R^p is called the anti-ideal point for problem (1) and is defined by

y^AI_k = max{y_k : y ∈ Y}. (4)

According to the following theorem, instead of directly finding Y, Benson's outer approximation algorithm is dedicated to finding Y′.

Theorem 4.1 (Benson (1998a,b)) The set Y′ ⊆ R^p is a nonempty, bounded polyhedron of dimension p and Y_N = Y′_N.

Benson's outer approximation algorithm can be illustrated as follows. First, a cover that contains Y′ is constructed and an interior point of Y′ is found. Then, for each vertex of the cover, the algorithm checks whether it is in Y′ or not. If not, the vertex and the interior point are connected to form a line segment and the unique boundary point of Y′ on this line segment is found. A cut (supporting hyperplane) of Y′ containing the boundary point is constructed and the cover is updated. The procedure repeats until all the vertices of the cover are in Y′. As a result, when the algorithm terminates, the cover is actually equal to Y′. In addition, to improve computation time, some improvements have been proposed in Shao and Ehrgott (2006).

In Ehrgott et al. (2007), we have extended Benson's algorithm to MOLPs which are R^p_≥-bounded. The algorithm finds P instead of Y′. Notice that P is an unbounded extension of Y′, as P = Y′ + R^p_≥ and Y′ = (ŷ − R^p_≥) ∩ P; P, Y′ and Y have the same nondominated set. In the same paper we also propose a dual variant of Benson's algorithm to solve the dual MOLP. We now recall the dual variant of Benson's algorithm and show how to obtain the nondominated facets of P from D. The algorithm is very similar to Benson's algorithm. Instead of Y′, it finds D. In the course of the algorithm, supporting hyperplanes of D are constructed. The following proposition is the basis for finding supporting hyperplanes.

Proposition 4.2 (Ehrgott et al. (2007)) Let v̄ ∈ max_K D. Then for every optimal solution x̄ of (P_1(v̄)), H*(Px̄) is a supporting hyperplane of D with v̄ ∈ H*(Px̄).

The algorithm is shown in Algorithm 1.

Algorithm 1 (Dual variant of Benson's algorithm)
Initialization: Choose some d̂ ∈ int D. Compute an optimal solution x^0 of (P_1(d̂)). Set S^0 = {v ∈ R^p : λ(v) ≥ 0, ϕ(Px^0, v) ≥ 0} and k = 1.
Iteration k:
Step k.1: If vert S^{k−1} ⊆ D, stop; otherwise choose a vertex s^k of S^{k−1} such that s^k ∉ D.
Step k.2: Compute α^k ∈ (0, 1) such that v^k := α^k s^k + (1 − α^k) d̂ ∈ max_K D.
Step k.3: Compute an optimal solution x^k of (P_1(v^k)).
Step k.4: Set S^k := S^{k−1} ∩ {v ∈ R^p : ϕ(Px^k, v) ≥ 0}.
Step k.5: Set k := k + 1 and go to Step k.1.
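The computational work in Steps k.3 and k.4 amounts to solving one scalar LP and adding one linear cut in the dual variable v. The sketch below does this with scipy.optimize.linprog; the problem data, the free variable bounds and the function names are assumptions made only to keep the example self-contained, and the snippet is a sketch of these two steps rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for min{Px : Ax >= b}; x is treated as a free variable.
P = np.array([[1.0, 0.0], [0.0, 1.0]])
A = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 2.0, 0.0, 0.0])

def lam(v):
    return np.append(v[:-1], 1.0 - v[:-1].sum())

def solve_P1(v):
    """Step k.3: solve P_1(v), i.e. min lam(v)^T P x over X = {x : Ax >= b}."""
    res = linprog(c=P.T @ lam(v), A_ub=-A, b_ub=-b,
                  bounds=[(None, None)] * A.shape[1], method="highs")
    assert res.success
    return res.x

def cut(x):
    """Step k.4: the new inequality phi(Px, v) >= 0, written as a^T v + c >= 0."""
    y = P @ x
    return np.append(y[:-1] - y[-1], -1.0), y[-1]

x = solve_P1(np.array([0.5, 0.0]))   # a dual point with weights lam = (1/2, 1/2)
a, c = cut(x)
print(x, a, c)
```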

In Step k.4, we use the on-line vertex enumeration algorithm of Chen and Hansen (1991) to calculate the vertices of S^k = S^{k−1} ∩ {v ∈ R^p : ϕ(Px^k, v) ≥ 0}. This algorithm needs to remember the adjacency list of each vertex of S^{k−1}. The adjacency list of a vertex v includes its adjacent vertices and its adjacent cutting planes. We say that a supporting hyperplane H*(Px^k) = {v ∈ R^p : ϕ(Px^k, v) = 0} is adjacent to a vertex s^k if the vertex is on the hyperplane, i.e., ϕ(Px^k, s^k) = 0. We say a cut is degenerate if the supporting hyperplane does not support D in a facet. The principle of the on-line vertex enumeration algorithm is to find the vertex sets of S^{k−1} on both sides of the supporting hyperplane H*(Px^k) = {v ∈ R^p : ϕ(Px^k, v) = 0} and then to use the adjacency lists of vertices to identify all edges of S^{k−1} intersecting H*(Px^k). The corresponding intersection points are computed and the adjacency lists updated.

When the algorithm terminates, we have the following proposition (see Ehrgott et al. (2007) for a proof).

Proposition 4.3
(1) The set of K-maximal vertices of D is vert S^k.
(2) The set {y ∈ R^p : ϕ(y, v) ≥ 0 for all v ∈ vert S^k} is a nondegenerate inequality representation of P.
(3) All R^p_≥-minimal (nondominated) vertices of P are contained in the set W := {Px^0, Px^1, ..., Px^k}.
(4) The set {v ∈ R^p : λ(v) ≥ 0, ϕ(y, v) ≥ 0 for all y ∈ W} is a (possibly degenerate) inequality representation of D.

Possibly degenerate means that this inequality representation may include redundant inequalities (a redundant inequality is produced during the iterations if a supporting hyperplane supports a face but not a facet of D). Items (1) and (4) in Proposition 4.3 give the vertex set and the inequality representation of D, while (2) gives the inequality representation of P and (3) gives a set of nondominated points of P which includes all the vertices of P. Proposition 4.3 shows that we can obtain both P and D when the algorithm terminates. At each iteration k, the hyperplane H*(Px^k) = {v ∈ R^p : ϕ(Px^k, v) = 0} is constructed so that it cuts off a portion of S^{k−1} containing s^k; thus S^0 ⊇ S^1 ⊇ ... ⊇ S^{k−1} ⊇ S^k = D.

Geometric duality theory establishes a relationship between P and D. P has the property that P = P + R^p_≥, while D has the property that D = D − K and the projection of D onto its first p−1 components is the polytope {t ∈ R^{p−1} : t ≥ 0, Σ_{i=1}^{p−1} t_i ≤ 1}. We now extend the geometric duality theorem to two special polyhedral convex sets which have the same properties as P and D, respectively.

Property 4.4 In this paper, we consider special convex polyhedral sets S ⊆ R^p with the property that S = S − K and the projection of S onto its first p−1 components is the polytope {t ∈ R^{p−1} : t ≥ 0, Σ_{i=1}^{p−1} t_i ≤ 1}.

Lemma 4.5 For S with Property 4.4, S_∞ = −K.
Proof. The proof is part of the proof of a proposition in Ehrgott et al. (2007).

Definition 4.6 For a polyhedral convex set S ⊆ R^p with Property 4.4, we define D(S) := {y ∈ R^p : ϕ(y, v) ≥ 0 for all v ∈ vert S}, where ϕ(y, v) = Σ_{i=1}^{p−1} y_i v_i + y_p (1 − Σ_{i=1}^{p−1} v_i) − v_p.
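The on-line algorithm of Chen and Hansen updates the vertex set and the adjacency lists incrementally as each cut arrives. As a much simpler, much slower stand-in, one can re-enumerate the vertices of S^k from its inequality description from scratch after every cut; since by Lemma 4.5 the only extreme direction of a set with Property 4.4 is −e^p, the vertices are all that has to be enumerated. The brute-force sketch below (hypothetical data, no adjacency bookkeeping) solves every p×p subsystem of the inequalities as equalities and keeps the feasible solutions.

```python
import numpy as np
from itertools import combinations

def enumerate_vertices(A, b, tol=1e-9):
    """Naive vertex enumeration for {v : Av >= b}: solve every p x p subsystem as an
    equality system and keep the solutions that satisfy all inequalities."""
    m, p = A.shape
    vertices = []
    for rows in combinations(range(m), p):
        sub_A, sub_b = A[list(rows)], b[list(rows)]
        if abs(np.linalg.det(sub_A)) < tol:
            continue                                   # rows not independent
        v = np.linalg.solve(sub_A, sub_b)
        if np.all(A @ v >= b - tol) and not any(np.allclose(v, w) for w in vertices):
            vertices.append(v)
    return vertices

# Hypothetical example: the triangle {v : v_1 >= 0, v_2 >= 0, v_1 + v_2 <= 1}.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])
print(enumerate_vertices(A, b))   # the corners (0,0), (0,1) and (1,0)
```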

8 Proposition.7 Let S R p with Property.. Then D(S) =D(S)+R p. Proof. It is obvious that D(S) D(S) +R p, therefore we only need to show D(S) D(S)+R p. Let w + d D(S) +R p with w D(S) andd Rp. Since w D(S), we have ϕ(w, v) for all v vert S. ϕ(w + d, v) =ϕ(w, v)+ d, λ(v) where λ(v) = (v,...,v p, p i= v i) T.Sinceϕ(w, v) and it is obvious that d, λ(v) because both d and λ(v) R p,wehaveϕ(w + d, v) andw + d D(S). Corollary.8 For S R p with Property., Theorem. holds for D = S and P = D(S). Proposition.9 Let S and S be polyhedral convex sets with Property. and S S,thenD(S ) D(S ). Proof. By Lemma. we have S = S = K. This means that S and S have only one extreme direction d = e p =(,...,, ). Suppose S has r vertices, v,...,v r. We need to show that for y D(S ), i.e., ϕ(y, v),v = v,...,v r, it holds that y D(S ). Let v be a vertex of S,thenv S as S S. Therefore, v can be expressed as v = r i= α iv i + νd with α i,ν, r i= α i =, and d = e p. We calculate ϕ(y, v ) as follows. ϕ(y, v ) = = = = = p y k vk + y p p( vk ) v p k= k= i= k= p r p y k ( α i vk i + νd k)+y p ( ( p y k k= r i= p α i vk i + y p ( r α i ϕ(y, v i )+ +ν i= r α i ϕ(y, v i )+ν i= k= i= k= r α i vk i + νd k)) i= r α i vk) i i= k= r α i vp i νd p i= r p p α i vp i + y k νd k y p νd k νd p k= Since ϕ(y, v i ), α andν, we have ϕ(y, v ). This means that any y D(S ) is also contained in D(S ). This proves D(S ) D(S ). For the dual variant of Benson s algorithm, Proposition.9 indicates that D(S k ) enlarges with the iteration and when the algorithm terminates at iteration k, D(S k )= D(D) = P. We give an example to illustrate the dual variant of Benson s algorithm and we show how S k and D(S k ) change with iteration k. Example. Consider the MOLP min{px : Ax b}, where ( ) P =,A=,b=. 8

P and D are shown in Fig. 1. The vertices of D are ( , ), ( , ), ( , ), ( , ), ( , ). Their corresponding facets (supporting hyperplanes) of P are y_1 = , y_1 + y_2 = , y_1 + y_2 = , y_1 + y_2 = , y_2 = . The four vertices of P, ( , ), ( , ), ( , ) and ( , ), correspond to the facets (supporting hyperplanes) of D given by v_1 + v_2 = , v_1 + v_2 = , v_1 + v_2 = and v_1 + v_2 = , respectively.

Figure 1: P and D for Example 4.10.

Fig. 2 shows the change of S^k with each iteration k. As can be seen, with increasing k, S^k becomes smaller and smaller until at termination it is the same as D. The vertices of the initial cover S^0 are ( , ) and ( , ). The first hyperplane cuts off vertex ( , ); the vertices of S^1 are ( , ), ( , ), and ( 8, ). The second hyperplane cuts off vertex ( , ); thus the vertices of S^2 are ( , ), ( 8, ), ( 8, ) and ( , ). The third hyperplane cuts off vertex ( 8, ); thus the vertices of S^3 are ( , ), ( , ), ( , ), ( 8, ) and ( , ). The fourth hyperplane cuts off vertex ( 8, ) and the vertices of S^4 are ( , ), ( , ), ( , ), ( , ) and ( , ). After the fourth cut, we have S^4 = D. Therefore, the vertices of S^4 are also the vertices of D.

Figure 2: The reduction of S^k with iteration k.

The change of D(S^k) after each iteration k can be seen in Fig. 3. The calculation of D(S^k) follows the definition D(S^k) = {y ∈ R^p : ϕ(y, v) ≥ 0 for all v ∈ vert S^k}. For example, D(S^0) = {y ∈ R^p : ϕ(y, v) ≥ 0 for v = ( , ), ( , )}, i.e., D(S^0) = {y_1 ≥ } ∩ {y_2 ≥ }. In contrast to the reduction of S^k, D(S^k) enlarges with iteration k. When the dual variant of Benson's algorithm terminates, S^4 = D and D(S^4) = D(D) = P. With Proposition 4.9 the process of outer approximation of D can be interpreted as a process of inner approximation of P.

4.1 Obtaining the Nondominated Facets of P from D

As seen in Proposition 4.3 (1), a vertex v = (v_1, v_2, ..., v_p) of D corresponds to a supporting hyperplane of P which supports P in a weakly nondominated facet.

Figure 3: The enlargement of D(S^k) with iteration k.

The hyperplane is λ(v)^T y = v_p, where λ(v) = (v_1, ..., v_{p−1}, 1 − Σ_{i=1}^{p−1} v_i)^T and λ(v) ≥ 0. If λ(v) > 0, then the supporting hyperplane supports P in a nondominated facet instead of a weakly nondominated facet. We call v an inner vertex of D if λ(v) > 0; otherwise, we call it a boundary vertex of D. To calculate the nondominated facets of P, we only need to consider the inner vertices of D.

Now we show how to calculate the nondominated facet F which corresponds to an inner vertex v of D. When the algorithm terminates, Proposition 4.3 (1) gives us the vertex set of D and (4) gives us the inequality representation of D. The inequality representation of D is D = {v ∈ R^p : λ(v) ≥ 0, ϕ(y, v) ≥ 0 for all y ∈ W}, where W = {Px^0, Px^1, ..., Px^k}. As mentioned before, at Step k.4 we use the on-line vertex enumeration algorithm of Chen and Hansen (1991). Therefore, we have the adjacency list for each vertex of D. The adjacency list of a vertex includes the adjacent vertices and the adjacent supporting hyperplanes of D. The adjacent supporting hyperplanes of an inner vertex form a subset of the hyperplanes {v ∈ R^p : ϕ(y, v) = 0}, y ∈ W. For an inner vertex of D, if all its adjacent supporting hyperplanes (cuts) are nondegenerate, i.e., each cut supports D in a facet, then we can find all the points in P which correspond to the cuts of D by geometric duality theory. These points are the vertices of F. If not all of its adjacent cuts support D in facets, i.e., some cuts are degenerate, then we need the following result.

Proposition 4.11 Let the inner vertex v of D correspond to the nondominated facet F of P, and suppose v has a degenerate adjacent cut (supporting hyperplane). Then this cut corresponds to a nondominated point p̄ ∈ F, but p̄ is not a vertex of F.
Proof. Suppose vertex v has k nondegenerate adjacent cuts (supporting hyperplanes). These k nondegenerate cuts correspond to the k vertices of F. The degenerate cut can be expressed as a linear combination of the k nondegenerate cuts. Correspondingly, the point that the degenerate cut corresponds to can be expressed by the same linear combination of the k vertices. Therefore, we have p̄ ∈ F.
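A hedged sketch of the facet recovery just described: classify a vertex v of D as inner or boundary via the sign of λ(v), and collect the outcome points y ∈ W whose cuts pass through v; by Proposition 4.11, the facet of P associated with an inner vertex is the convex hull of these points. Function names and tolerances are illustrative, not taken from the paper.

```python
import numpy as np

def lam(v):
    return np.append(v[:-1], 1.0 - v[:-1].sum())

def phi(y, v):
    return y[:-1] @ v[:-1] + y[-1] * (1.0 - v[:-1].sum()) - v[-1]

def is_inner_vertex(v, tol=1e-9):
    """Inner vertex of D: lambda(v) > 0 componentwise, so its hyperplane
    lambda(v)^T y = v_p supports P in a nondominated (not merely weakly
    nondominated) facet."""
    return bool(np.all(lam(v) > tol))

def facet_points(v, W, tol=1e-9):
    """Outcome points y in W = {Px^0, ..., Px^k} whose cuts are adjacent to v,
    i.e. phi(y, v) = 0.  The corresponding facet of P is their convex hull;
    points coming from degenerate cuts are in the facet but are not its vertices."""
    return [y for y in W if abs(phi(y, v)) <= tol]
```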

Figure 4: Vertex ( , ) and its degenerate cut.   Figure 5: P and the point corresponding to a degenerate cut.

Proposition 4.11 shows that the points of P which correspond to the adjacent cuts of a vertex of D lie on the same facet of P. Moreover, all nondegenerate cuts of D correspond to vertices of the facet of P and all degenerate cuts correspond to points which are not vertices of the facet. Therefore, we can use the convex hull of these points to find the facet. We show an example with a degenerate cut.

Example 4.12 For Example 4.10, the vertex ( , ) of D is adjacent to three cuts: v_1 + v_2 = , v_1 + v_2 = and v_1 = . Among them, v_1 = is a degenerate cut because it supports D only in the vertex ( , ) instead of in a facet. This can be seen in Fig. 4. The vertex ( , ) of D corresponds to the line segment between ( , ) and ( , ), a facet F of P, as shown in Fig. 5. Cut v_1 + v_2 = corresponds to ( , ) and cut v_1 + v_2 = corresponds to ( , ), while v_1 = corresponds to ( , ), a nondominated point of P. The two points ( , ) and ( , ) are the vertices of F, while the point ( , ) is in F but is not a vertex of F. The degenerate cut v_1 = can be obtained by simply adding the two adjacent nondegenerate cuts together with equal weights, i.e., ½(v_1 + v_2 ) + ½(v_1 + v_2 ) = , or v_1 = . On the other hand, ( , ) can also be obtained by adding the two vertices ( , ) and ( , ) together with the same equal weights, i.e., ½( , ) + ½( , ) = ( , ).

5 Solving the Dual MOLP Approximately

If the nondominated set of an MOLP is curved, which means that P has very many facets and very many vertices, then D will have very many vertices and facets according to Corollary 3.4. Therefore, whether we solve the primal problem with Benson's algorithm or the dual problem with the dual variant of Benson's algorithm, computation time may be a problem. To improve the computation time, an approximation version of Benson's algorithm has been proposed in Shao and Ehrgott (2006). For a vertex of S^k that is not in Y′, but whose Euclidean distance to the boundary point of Y′ on the line segment between the vertex and the interior point is less than a tolerance ɛ (ɛ ∈ R, ɛ > 0), we omit constructing the cut and remember both the vertex and the boundary point as a point of the outer approximation and a point of the inner approximation, respectively. Finally, we obtain an outer approximation Y_o of Y′ and an inner approximation Y_i of Y′ which sandwich Y′, i.e., Y_o ⊇ Y′ ⊇ Y_i. Moreover, each vertex of Y_o has a corresponding point of Y_i, and the Euclidean distance between the two points is less than the tolerance ɛ. As a result of the approximation, all the points of Y_i calculated are on the boundary of Y′, and the weakly nondominated set of Y_i has been proved to be a set of weakly ε-nondominated points of Y′ (ε = ɛe).
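For comparison with the dual approach developed next, here is a minimal sketch of the acceptance test used in the primal approximation version just recalled: the boundary point on the segment from the interior point to an infeasible vertex is found by one LP, and the cut is skipped if the vertex lies within Euclidean distance ɛ of it. The data and names are hypothetical, and membership is taken with respect to the extended set P = {y : Px ≤ y for some x ∈ X} rather than the truncated set Y′; both are simplifying assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def boundary_point(s, p_hat, P, A, b):
    """Largest alpha in [0, 1] with p_hat + alpha*(s - p_hat) in
    {y : Px <= y, Ax >= b for some x}; returns that boundary point on the
    segment from the interior point p_hat to the outside vertex s."""
    n = A.shape[1]
    d = s - p_hat
    c = np.zeros(n + 1)
    c[-1] = -1.0                                         # maximize alpha
    A_ub = np.block([[P, -d.reshape(-1, 1)],             # P x - alpha*d <= p_hat
                     [-A, np.zeros((A.shape[0], 1))]])   # A x >= b
    b_ub = np.concatenate([p_hat, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n + [(0.0, 1.0)], method="highs")
    assert res.success
    return p_hat + res.x[-1] * d

def accept_without_cut(s, p_hat, P, A, b, eps):
    """Skip the cut at vertex s if s is within distance eps of its boundary point."""
    return np.linalg.norm(s - boundary_point(s, p_hat, P, A, b)) <= eps
```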

In this section, we propose to solve the dual MOLP approximately while controlling the approximation error, to obtain an approximate extended feasible set D_o in objective space, and then to find the polyhedral set D(D_o). D(D_o) is an inner approximation of P, the original extended primal feasible set. Finally we will show that the nondominated set of D(D_o) is actually an ε-nondominated set of the original MOLP. Our approximate dual variant of Benson's algorithm is identical to Algorithm 1 except for Step k.1. Let ɛ ∈ R, ɛ > 0 be a tolerance; then the changes are as follows.

Step k.1: If, for each v ∈ vert S^{k−1}, v ∈ D + ɛe^p is satisfied, then the outer approximation of D, denoted by D_o, is equal to S^{k−1}; stop. Otherwise, choose any vertex s^k of S^{k−1} such that s^k ∉ D + ɛe^p and continue.

To check whether v ∈ D + ɛe^p, we solve D_1(v) and obtain its optimal objective value f. If v_p − f ≤ ɛ, then v ∈ D + ɛe^p. Since D_o ⊇ D, Proposition 4.9 implies D(D_o) ⊆ D(D) = P. This means that D(D_o) is an inner approximation of P. We use P_i to denote D(D_o). We illustrate the algorithm using the same data as in Example 4.10.

Example 5.1 In Example 4.10, let us set ɛ = . After two cuts there are two vertices of S^2, ( 8, ) and ( 8, ), outside D. The boundary points of D with the same v_1 value as these two vertices are ( 8, 8 ) and ( 8, 8 ), respectively. Both Euclidean distances between a vertex and its corresponding boundary point are equal to 8. We accept these two infeasible points for the outer approximation of D because the distances to their corresponding boundary points are less than ɛ, i.e., both vertices lie in D + ɛe^p. When the algorithm terminates, the total number of iterations is and S = D_o. Figs. 6 and 7 show D_o and its corresponding polyhedral set P_i = D(D_o).

Figure 6: D_o.   Figure 7: P_i.

We now evaluate the approximation quality of P_i as an inner approximation of P. Let us first recall the definition of ε-efficient solutions.

Definition 5.2 (Loridan (1984)) Consider the MOLP (1) and let ε ∈ R^p, ε ≥ 0.
1. A feasible solution x̂ ∈ X is called an ε-efficient solution of (1) if there does not exist x ∈ X such that Px ≤ Px̂ − ε. Correspondingly, ŷ = Px̂ is called an ε-nondominated point in objective space.
2. A feasible solution x̂ ∈ X is called a weakly ε-efficient solution if there does not exist x ∈ X such that Px < Px̂ − ε. Correspondingly, ŷ = Px̂ is called a weakly ε-nondominated point in objective space.
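The acceptance test in the modified Step k.1 above requires one LP per vertex. The following is a minimal sketch of this test with hypothetical data and names; by LP duality the optimal value f of D_1(v) equals that of P_1(v), and it is the latter that the sketch actually solves.

```python
import numpy as np
from scipy.optimize import linprog

def lam(v):
    return np.append(v[:-1], 1.0 - v[:-1].sum())

def within_tolerance(v, P, A, b, eps):
    """Modified Step k.1: accept v if v lies in D + eps*e^p, i.e. v_p - f <= eps,
    where f is the common optimal value of D_1(v) and P_1(v)."""
    res = linprog(c=P.T @ lam(v), A_ub=-A, b_ub=-b,
                  bounds=[(None, None)] * A.shape[1], method="highs")
    assert res.success
    return v[-1] - res.fun <= eps
```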

Now we proceed to show that the weakly nondominated set of P_i is actually a set of weakly ε-nondominated points of P. First, let us shift D along e^p by ɛ; this gives D_u = D + ɛe^p. When the approximate algorithm terminates, the set of K-maximal points of D_o lies between the set of K-maximal points of D and the set of K-maximal points of D_u, i.e., D ⊆ D_o ⊆ D_u. We can calculate P = D(D) and P_u = D(D_u). According to Proposition 4.9, we have P ⊇ P_i ⊇ P_u. Therefore, the set of weakly nondominated points (R^p_≥-minimal points) of P_i = D(D_o) lies between the set of weakly nondominated points of P and the set of weakly nondominated points of P_u.

Theorem 5.3 Suppose the dual approximation error is ɛ and let ε = ɛe. Then the nondominated set of P_i is a set of ε-nondominated points of P.
Proof. First we show that the nondominated set of P_u is a set of ε-nondominated points of P. According to Corollary 4.8, Theorem 3.3 applies to D = D_u and P = P_u; thus we can use duality theory to find P_u. Let v = (v_1, v_2, ..., v_p) be a vertex of D. Then there is a corresponding vertex v^u of D_u with v^u = v + ɛe^p. Suppose vertex v of D corresponds to the facet λ(v)^T y = v_p of P, where λ(v) = (v_1, ..., v_{p−1}, 1 − Σ_{i=1}^{p−1} v_i)^T, and suppose vertex v^u = (v^u_1, v^u_2, ..., v^u_p) of D_u corresponds to the facet λ(v^u)^T y = v^u_p of P_u, where λ(v^u) = (v^u_1, ..., v^u_{p−1}, 1 − Σ_{i=1}^{p−1} v^u_i)^T. As v + ɛe^p = v^u, we have λ(v^u) = λ(v). Therefore, the facet λ(v)^T y = v_p of P is parallel to the facet λ(v^u)^T y = v^u_p of P_u.
Let F be a facet of D; then there is a corresponding facet F^u of D_u and F^u is parallel to F. Suppose F corresponds to the vertex y = (y_1, y_2, ..., y_p) of P; then the equation of the hyperplane spanned by F is (y_p − y_1)v_1 + (y_p − y_2)v_2 + ... + (y_p − y_{p−1})v_{p−1} + v_p = y_p. Suppose F^u corresponds to the vertex y^u = (y^u_1, y^u_2, ..., y^u_p) of P_u; then the equation of the hyperplane spanned by F^u is (y^u_p − y^u_1)v_1 + (y^u_p − y^u_2)v_2 + ... + (y^u_p − y^u_{p−1})v_{p−1} + v_p = y^u_p. F^u and F are parallel, and F^u is obtained by moving F along e^p by ɛ; therefore we have y^u_p = y_p + ɛ and y^u_p − y^u_1 = y_p − y_1, ..., y^u_p − y^u_{p−1} = y_p − y_{p−1}, i.e., y^u = y + ɛe.
Altogether, P_u can be obtained by moving every point y of P to y + ɛe. Therefore, the nondominated set of P_u is a set of ε-nondominated points of P. Moreover, as P_u ⊆ P_i ⊆ P, the nondominated set of P_i is also a set of ε-nondominated points of P.

Example 5.4 As in Example 5.1, ɛ = . Fig. 8 and Fig. 9 show D, D_u, P and P_u. The vertices of P are ( , ), ( , ), ( , ) and ( , ). The corresponding vertices of P_u are ( , ), ( , ), ( , ) and ( , ). The nondominated set of P_u is a set of ε-nondominated points of P, where ε = ɛe.

6 Application: Radiotherapy Treatment Planning

The aim of radiotherapy is to kill tumor cells while at the same time protecting the surrounding tissue and organs from the damaging effects of radiation. To achieve these goals, computerized inverse planning systems are used. Given the number of beams and the beam directions, beam intensity profiles that yield the best dose distribution under consideration of clinical and physical constraints are calculated. This is called the beam intensity optimization problem.

Figure 8: D and D_u.   Figure 9: P and P_u.

We have formulated the beam intensity optimization problem as an MOLP in Shao and Ehrgott (2006). The objectives of the MOLP are to minimize the maximum deviations y_1, y_2 and y_3 of the delivered dose from the tumor lower bounds, from the critical organ upper bounds and from the normal tissue upper bounds, respectively. We used both Benson's algorithm and the approximation version of Benson's algorithm to solve the MOLP of an acoustic neuroma (AC), a prostate (PR) and a pancreatic lesion (PL) case, respectively.

In this paper, we solve the dual problems of the same three clinical cases, i.e., the acoustic neuroma (AC), the prostate (PR) and the pancreatic lesion (PL) cases, both by the dual variant of Benson's algorithm and by the approximate dual variant of Benson's algorithm. Both the acoustic neuroma case and the prostate case can be solved exactly with the dual variant of Benson's algorithm. We show the results of solving the dual problem exactly with the dual variant of Benson's algorithm and approximately with the approximate dual variant of Benson's algorithm.

The results for the acoustic neuroma case are shown in Fig. 10. The set max_K D and the union of the nondominated facets of P obtained by the dual variant of Benson's algorithm are on the top left and right, respectively. The other pictures show the results of solving the dual problem approximately with the approximate dual variant of Benson's algorithm. Pictures on the left are max_K D_o while pictures on the right are the union of the nondominated facets of P_i. From top to bottom, they are for approximation errors ɛ = . and ɛ = ., respectively.

The results for the prostate case are in Fig. 11. The pictures on the top left and right are max_K D and the union of the nondominated facets of P; they were obtained by the dual variant of Benson's algorithm. The rest of the pictures show the results of solving the dual problem approximately with the approximate dual variant of Benson's algorithm. Pictures on the left show max_K D_o while pictures on the right show the union of the nondominated facets of P_i. From top to bottom, they are for approximation errors ɛ = ., ɛ = . and ɛ = ., respectively.

Figure 10: The results for the AC case.

Figure 11: The results for the PR case.

The dual variant of Benson's algorithm cannot solve the problem of the pancreatic lesion case within hours of computation. Therefore, we show the results obtained by the approximate dual variant of Benson's algorithm in Fig. 12. Pictures on the left are max_K D_o and pictures on the right are the union of the nondominated facets of P_i. Pictures on the top are for ɛ = . while pictures on the bottom are for ɛ = .

Figure 12: The results for the PL case.

Summarizing information comparing the number of vertices and the number of cuts of the dual variant of Benson's algorithm and the approximate dual variant of Benson's algorithm with various values of ɛ is given in Table 1. The dual variant of Benson's algorithm can solve the dual problem of the first two cases exactly in one hour, but not the problem of the pancreatic lesion case. The approximate dual variant of Benson's algorithm can solve all three problems within minutes with approximation error ɛ = . Table 1 and Figs. 10, 11 and 12 clearly show the effect of the choice of ɛ: the smaller the approximation error, the more vertices and the more cuts are generated, and the longer the computation time.

To make comparisons with the results obtained by Benson's algorithm and the approximation version of Benson's algorithm, we show Y′ and Y_o with different values of the approximation error ɛ for the above three cases. Benson's algorithm can solve the acoustic neuroma case and the prostate case exactly. Fig. 13 shows Y′ for the acoustic neuroma case and Fig. 15 shows Y′ for the prostate case. Y_o with ɛ = . for the acoustic neuroma case obtained by the approximation version of Benson's algorithm is shown in Fig. 14, while Y_o with ɛ = . for the prostate case obtained by the approximation version of Benson's algorithm is shown in Fig. 16. Benson's algorithm cannot solve the pancreatic lesion case within hours of computation; we show Y_o with approximation error ɛ = . in Fig. 17 and ɛ = . in Fig. 18.

Figure 13: AC: Y′ solved by Benson's algorithm.   Figure 14: AC: Y_o solved by the approximation version of Benson's algorithm with ɛ = .
Figure 15: PR: Y′ solved by Benson's algorithm.   Figure 16: PR: Y_o with ɛ = .
Figure 17: PL: Y_o with ɛ = .   Figure 18: PL: Y_o with ɛ = .

The number of vertices and the number of cuts of Y_o with various values of ɛ are also given in Table 1. Comparing the computation times of exactly solving the primal and exactly solving the dual problem for both the acoustic neuroma case and the prostate case, solving the primal problem exactly using Benson's algorithm needs more computation time than solving the dual problem exactly using the dual variant of Benson's algorithm. If the approximation error ɛ in the approximation version of Benson's algorithm and the approximation error ɛ in the approximate dual variant of Benson's algorithm are the same, then both algorithms guarantee finding an ε-nondominated set (ε = ɛe). Thus we can compare the computation times for computing Y_o and P_i with the same approximation error ɛ. In Table 1, we observe that for the three clinical cases, solving the dual approximately is always faster than solving the primal approximately with the same approximation error ɛ.

Table 1: Running time and number of vertices and cutting planes to solve the dual problem and the primal problem for the three cases with different approximation errors ɛ (ɛ = 0 means the problem is solved exactly). The columns are: Case; ɛ; for solving the dual: time (seconds), vertices of D_o, cuts of D_o; for solving the primal: time (seconds), nondominated vertices of Y_o, cuts of Y_o. The rows cover the cases AN, PR and PL.

Solving the primal MOLP gives us Y′. If we only need the nondominated facets, we have to remove the weakly nondominated facets. Solving the dual problem allows us to calculate the nondominated facets directly. This can be taken as an advantage of solving the dual problem.

7 Conclusion

In this paper, we have developed an approximate dual variant of Benson's algorithm to solve MOLPs in objective space. We have shown that the algorithm guarantees finding ε-nondominated points with a specified accuracy ɛ. This algorithm was applied to the beam intensity optimization problem of radiation therapy treatment planning. Three clinical cases were used and the results were compared with those obtained by approximately solving the primal with the approximation version of Benson's algorithm. When both algorithms use the same approximation error ɛ, both of them guarantee producing an ε-nondominated set (ε = ɛe). We found that approximately solving the dual with the approximate dual variant of Benson's algorithm is faster than approximately solving the primal with the approximation version of Benson's algorithm for all three clinical cases.

References

Benson, H. P. (1998a). Hybrid approach for solving multiple-objective linear programs in outcome space. Journal of Optimization Theory and Applications, 98, 17–35.

Benson, H. P. (1998b). An outer approximation algorithm for generating all efficient extreme points in the outcome set of a multiple objective linear programming problem. Journal of Global Optimization, 13, 1–24.

Chen, P. C. and Hansen, P. (1991). On-line and off-line vertex enumeration by adjacency lists. Operations Research Letters, 10, 403–409.

Ehrgott, M. and Wiecek, M. (2005). Multiobjective programming. In J. Figueira, S. Greco, and M. Ehrgott, editors, Multicriteria Decision Analysis: State of the Art Surveys. Springer Science + Business Media, New York.

Ehrgott, M., Löhne, A., and Shao, L. (2007). A dual variant of Benson's outer approximation algorithm. Report 6, Department of Engineering Science, The University of Auckland.

Heyde, F. and Löhne, A. (2006). Geometric duality in multi-objective linear programming. Reports on Optimization and Stochastics 6-, Department of Mathematics and Computer Science, Martin-Luther-University Halle-Wittenberg. Submitted to SIAM Journal on Optimization.

Loridan, P. (1984). ε-solutions in vector minimization problems. Journal of Optimization Theory and Applications, 43(2), 265–276.

Rockafellar, R. (1970). Convex Analysis. Princeton University Press, Princeton.

Ruzika, S. and Wiecek, M. M. (2005). Approximation methods in multiobjective programming. Journal of Optimization Theory and Applications, 126(3), 473–501.

Shao, L. and Ehrgott, M. (2006). Approximately solving multiobjective linear programmes in objective space and an application in radiotherapy treatment planning. Report 66, Department of Engineering Science, The University of Auckland. Available online at esc-tr-66.pdf. Accepted for publication in Mathematical Methods of Operations Research.

Webster, R. (1994). Convexity. Oxford Science Publications. Oxford University Press, Oxford.


More information

Bilinear Programming

Bilinear Programming Bilinear Programming Artyom G. Nahapetyan Center for Applied Optimization Industrial and Systems Engineering Department University of Florida Gainesville, Florida 32611-6595 Email address: artyom@ufl.edu

More information

Rigidity of ball-polyhedra via truncated Voronoi and Delaunay complexes

Rigidity of ball-polyhedra via truncated Voronoi and Delaunay complexes !000111! NNNiiinnnttthhh IIInnnttteeerrrnnnaaatttiiiooonnnaaalll SSSyyymmmpppooosssiiiuuummm ooonnn VVVooorrrooonnnoooiii DDDiiiaaagggrrraaammmsss iiinnn SSSccciiieeennnccceee aaannnddd EEEnnngggiiinnneeeeeerrriiinnnggg

More information

maximize c, x subject to Ax b,

maximize c, x subject to Ax b, Lecture 8 Linear programming is about problems of the form maximize c, x subject to Ax b, where A R m n, x R n, c R n, and b R m, and the inequality sign means inequality in each row. The feasible set

More information

arxiv: v1 [math.co] 12 Dec 2017

arxiv: v1 [math.co] 12 Dec 2017 arxiv:1712.04381v1 [math.co] 12 Dec 2017 Semi-reflexive polytopes Tiago Royer Abstract The Ehrhart function L P(t) of a polytope P is usually defined only for integer dilation arguments t. By allowing

More information

Geometry. Every Simplicial Polytope with at Most d + 4 Vertices Is a Quotient of a Neighborly Polytope. U. H. Kortenkamp. 1.

Geometry. Every Simplicial Polytope with at Most d + 4 Vertices Is a Quotient of a Neighborly Polytope. U. H. Kortenkamp. 1. Discrete Comput Geom 18:455 462 (1997) Discrete & Computational Geometry 1997 Springer-Verlag New York Inc. Every Simplicial Polytope with at Most d + 4 Vertices Is a Quotient of a Neighborly Polytope

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

Linear Optimization. Andongwisye John. November 17, Linkoping University. Andongwisye John (Linkoping University) November 17, / 25

Linear Optimization. Andongwisye John. November 17, Linkoping University. Andongwisye John (Linkoping University) November 17, / 25 Linear Optimization Andongwisye John Linkoping University November 17, 2016 Andongwisye John (Linkoping University) November 17, 2016 1 / 25 Overview 1 Egdes, One-Dimensional Faces, Adjacency of Extreme

More information

Convex hulls of spheres and convex hulls of convex polytopes lying on parallel hyperplanes

Convex hulls of spheres and convex hulls of convex polytopes lying on parallel hyperplanes Convex hulls of spheres and convex hulls of convex polytopes lying on parallel hyperplanes Menelaos I. Karavelas joint work with Eleni Tzanaki University of Crete & FO.R.T.H. OrbiCG/ Workshop on Computational

More information

Planar Graphs. 1 Graphs and maps. 1.1 Planarity and duality

Planar Graphs. 1 Graphs and maps. 1.1 Planarity and duality Planar Graphs In the first half of this book, we consider mostly planar graphs and their geometric representations, mostly in the plane. We start with a survey of basic results on planar graphs. This chapter

More information

LECTURE 18 LECTURE OUTLINE

LECTURE 18 LECTURE OUTLINE LECTURE 18 LECTURE OUTLINE Generalized polyhedral approximation methods Combined cutting plane and simplicial decomposition methods Lecture based on the paper D. P. Bertsekas and H. Yu, A Unifying Polyhedral

More information

Section Notes 5. Review of Linear Programming. Applied Math / Engineering Sciences 121. Week of October 15, 2017

Section Notes 5. Review of Linear Programming. Applied Math / Engineering Sciences 121. Week of October 15, 2017 Section Notes 5 Review of Linear Programming Applied Math / Engineering Sciences 121 Week of October 15, 2017 The following list of topics is an overview of the material that was covered in the lectures

More information

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize.

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize. Cornell University, Fall 2017 CS 6820: Algorithms Lecture notes on the simplex method September 2017 1 The Simplex Method We will present an algorithm to solve linear programs of the form maximize subject

More information

L-CONVEX-CONCAVE SETS IN REAL PROJECTIVE SPACE AND L-DUALITY

L-CONVEX-CONCAVE SETS IN REAL PROJECTIVE SPACE AND L-DUALITY MOSCOW MATHEMATICAL JOURNAL Volume 3, Number 3, July September 2003, Pages 1013 1037 L-CONVEX-CONCAVE SETS IN REAL PROJECTIVE SPACE AND L-DUALITY A. KHOVANSKII AND D. NOVIKOV Dedicated to Vladimir Igorevich

More information

Optimality certificates for convex minimization and Helly numbers

Optimality certificates for convex minimization and Helly numbers Optimality certificates for convex minimization and Helly numbers Amitabh Basu Michele Conforti Gérard Cornuéjols Robert Weismantel Stefan Weltge October 20, 2016 Abstract We consider the problem of minimizing

More information

CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 1: Introduction to Optimization. Instructor: Shaddin Dughmi

CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 1: Introduction to Optimization. Instructor: Shaddin Dughmi CS599: Convex and Combinatorial Optimization Fall 013 Lecture 1: Introduction to Optimization Instructor: Shaddin Dughmi Outline 1 Course Overview Administrivia 3 Linear Programming Outline 1 Course Overview

More information

Target Cuts from Relaxed Decision Diagrams

Target Cuts from Relaxed Decision Diagrams Target Cuts from Relaxed Decision Diagrams Christian Tjandraatmadja 1, Willem-Jan van Hoeve 1 1 Tepper School of Business, Carnegie Mellon University, Pittsburgh, PA {ctjandra,vanhoeve}@andrew.cmu.edu

More information

ORIE 6300 Mathematical Programming I September 2, Lecture 3

ORIE 6300 Mathematical Programming I September 2, Lecture 3 ORIE 6300 Mathematical Programming I September 2, 2014 Lecturer: David P. Williamson Lecture 3 Scribe: Divya Singhvi Last time we discussed how to take dual of an LP in two different ways. Today we will

More information

Exact adaptive parallel algorithms for data depth problems. Vera Rosta Department of Mathematics and Statistics McGill University, Montreal

Exact adaptive parallel algorithms for data depth problems. Vera Rosta Department of Mathematics and Statistics McGill University, Montreal Exact adaptive parallel algorithms for data depth problems Vera Rosta Department of Mathematics and Statistics McGill University, Montreal joint work with Komei Fukuda School of Computer Science McGill

More information

CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I. Instructor: Shaddin Dughmi

CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I. Instructor: Shaddin Dughmi CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I Instructor: Shaddin Dughmi Announcements Posted solutions to HW1 Today: Combinatorial problems

More information

Introduction to Modern Control Systems

Introduction to Modern Control Systems Introduction to Modern Control Systems Convex Optimization, Duality and Linear Matrix Inequalities Kostas Margellos University of Oxford AIMS CDT 2016-17 Introduction to Modern Control Systems November

More information

DM545 Linear and Integer Programming. Lecture 2. The Simplex Method. Marco Chiarandini

DM545 Linear and Integer Programming. Lecture 2. The Simplex Method. Marco Chiarandini DM545 Linear and Integer Programming Lecture 2 The Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. 2. 3. 4. Standard Form Basic Feasible Solutions

More information

Minimum norm points on polytope boundaries

Minimum norm points on polytope boundaries Minimum norm points on polytope boundaries David Bremner UNB December 20, 2013 Joint work with Yan Cui, Zhan Gao David Bremner (UNB) Minimum norm facet December 20, 2013 1 / 27 Outline 1 Notation 2 Maximum

More information

CAT(0)-spaces. Münster, June 22, 2004

CAT(0)-spaces. Münster, June 22, 2004 CAT(0)-spaces Münster, June 22, 2004 CAT(0)-space is a term invented by Gromov. Also, called Hadamard space. Roughly, a space which is nonpositively curved and simply connected. C = Comparison or Cartan

More information

In this lecture, we ll look at applications of duality to three problems:

In this lecture, we ll look at applications of duality to three problems: Lecture 7 Duality Applications (Part II) In this lecture, we ll look at applications of duality to three problems: 1. Finding maximum spanning trees (MST). We know that Kruskal s algorithm finds this,

More information

6.854 Advanced Algorithms. Scribes: Jay Kumar Sundararajan. Duality

6.854 Advanced Algorithms. Scribes: Jay Kumar Sundararajan. Duality 6.854 Advanced Algorithms Scribes: Jay Kumar Sundararajan Lecturer: David Karger Duality This lecture covers weak and strong duality, and also explains the rules for finding the dual of a linear program,

More information

arxiv: v1 [math.co] 27 Feb 2015

arxiv: v1 [math.co] 27 Feb 2015 Mode Poset Probability Polytopes Guido Montúfar 1 and Johannes Rauh 2 arxiv:1503.00572v1 [math.co] 27 Feb 2015 1 Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany,

More information

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 36

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 36 CS 473: Algorithms Ruta Mehta University of Illinois, Urbana-Champaign Spring 2018 Ruta (UIUC) CS473 1 Spring 2018 1 / 36 CS 473: Algorithms, Spring 2018 LP Duality Lecture 20 April 3, 2018 Some of the

More information

LP Geometry: outline. A general LP. minimize x c T x s.t. a T i. x b i, i 2 M 1 a T i x = b i, i 2 M 3 x j 0, j 2 N 1. where

LP Geometry: outline. A general LP. minimize x c T x s.t. a T i. x b i, i 2 M 1 a T i x = b i, i 2 M 3 x j 0, j 2 N 1. where LP Geometry: outline I Polyhedra I Extreme points, vertices, basic feasible solutions I Degeneracy I Existence of extreme points I Optimality of extreme points IOE 610: LP II, Fall 2013 Geometry of Linear

More information

Outline. CS38 Introduction to Algorithms. Linear programming 5/21/2014. Linear programming. Lecture 15 May 20, 2014

Outline. CS38 Introduction to Algorithms. Linear programming 5/21/2014. Linear programming. Lecture 15 May 20, 2014 5/2/24 Outline CS38 Introduction to Algorithms Lecture 5 May 2, 24 Linear programming simplex algorithm LP duality ellipsoid algorithm * slides from Kevin Wayne May 2, 24 CS38 Lecture 5 May 2, 24 CS38

More information

Linear Programming. Larry Blume. Cornell University & The Santa Fe Institute & IHS

Linear Programming. Larry Blume. Cornell University & The Santa Fe Institute & IHS Linear Programming Larry Blume Cornell University & The Santa Fe Institute & IHS Linear Programs The general linear program is a constrained optimization problem where objectives and constraints are all

More information

Decision Aid Methodologies In Transportation Lecture 1: Polyhedra and Simplex method

Decision Aid Methodologies In Transportation Lecture 1: Polyhedra and Simplex method Decision Aid Methodologies In Transportation Lecture 1: Polyhedra and Simplex method Chen Jiang Hang Transportation and Mobility Laboratory April 15, 2013 Chen Jiang Hang (Transportation and Mobility Decision

More information

Monotone Paths in Geometric Triangulations

Monotone Paths in Geometric Triangulations Monotone Paths in Geometric Triangulations Adrian Dumitrescu Ritankar Mandal Csaba D. Tóth November 19, 2017 Abstract (I) We prove that the (maximum) number of monotone paths in a geometric triangulation

More information

EULER S FORMULA AND THE FIVE COLOR THEOREM

EULER S FORMULA AND THE FIVE COLOR THEOREM EULER S FORMULA AND THE FIVE COLOR THEOREM MIN JAE SONG Abstract. In this paper, we will define the necessary concepts to formulate map coloring problems. Then, we will prove Euler s formula and apply

More information

COMPENDIOUS LEXICOGRAPHIC METHOD FOR MULTI-OBJECTIVE OPTIMIZATION. Ivan P. Stanimirović. 1. Introduction

COMPENDIOUS LEXICOGRAPHIC METHOD FOR MULTI-OBJECTIVE OPTIMIZATION. Ivan P. Stanimirović. 1. Introduction FACTA UNIVERSITATIS (NIŠ) Ser. Math. Inform. Vol. 27, No 1 (2012), 55 66 COMPENDIOUS LEXICOGRAPHIC METHOD FOR MULTI-OBJECTIVE OPTIMIZATION Ivan P. Stanimirović Abstract. A modification of the standard

More information

Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity

Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Marc Uetz University of Twente m.uetz@utwente.nl Lecture 5: sheet 1 / 26 Marc Uetz Discrete Optimization Outline 1 Min-Cost Flows

More information

Mathematical Optimization in Radiotherapy Treatment Planning

Mathematical Optimization in Radiotherapy Treatment Planning 1 / 35 Mathematical Optimization in Radiotherapy Treatment Planning Ehsan Salari Department of Radiation Oncology Massachusetts General Hospital and Harvard Medical School HST S14 May 13, 2013 2 / 35 Outline

More information

Introductory Operations Research

Introductory Operations Research Introductory Operations Research Theory and Applications Bearbeitet von Harvir Singh Kasana, Krishna Dev Kumar 1. Auflage 2004. Buch. XI, 581 S. Hardcover ISBN 978 3 540 40138 4 Format (B x L): 15,5 x

More information

Approximate Algorithms for Touring a Sequence of Polygons

Approximate Algorithms for Touring a Sequence of Polygons Approximate Algorithms for Touring a Sequence of Polygons Fajie Li 1 and Reinhard Klette 2 1 Institute for Mathematics and Computing Science, University of Groningen P.O. Box 800, 9700 AV Groningen, The

More information

Integrating column generation in a method to compute a discrete representation of the non-dominated set of multi-objective linear programmes

Integrating column generation in a method to compute a discrete representation of the non-dominated set of multi-objective linear programmes 4OR-Q J Oper Res (2017) 15:331 357 DOI 10.1007/s10288-016-0336-9 RESEARCH PAPER Integrating column generation in a method to compute a discrete representation of the non-dominated set of multi-objective

More information

CS675: Convex and Combinatorial Optimization Spring 2018 The Simplex Algorithm. Instructor: Shaddin Dughmi

CS675: Convex and Combinatorial Optimization Spring 2018 The Simplex Algorithm. Instructor: Shaddin Dughmi CS675: Convex and Combinatorial Optimization Spring 2018 The Simplex Algorithm Instructor: Shaddin Dughmi Algorithms for Convex Optimization We will look at 2 algorithms in detail: Simplex and Ellipsoid.

More information

Orientation of manifolds - definition*

Orientation of manifolds - definition* Bulletin of the Manifold Atlas - definition (2013) Orientation of manifolds - definition* MATTHIAS KRECK 1. Zero dimensional manifolds For zero dimensional manifolds an orientation is a map from the manifold

More information

A mini-introduction to convexity

A mini-introduction to convexity A mini-introduction to convexity Geir Dahl March 14, 2017 1 Introduction Convexity, or convex analysis, is an area of mathematics where one studies questions related to two basic objects, namely convex

More information

x ji = s i, i N, (1.1)

x ji = s i, i N, (1.1) Dual Ascent Methods. DUAL ASCENT In this chapter we focus on the minimum cost flow problem minimize subject to (i,j) A {j (i,j) A} a ij x ij x ij {j (j,i) A} (MCF) x ji = s i, i N, (.) b ij x ij c ij,

More information

Duality. Primal program P: Maximize n. Dual program D: Minimize m. j=1 c jx j subject to n. j=1. i=1 b iy i subject to m. i=1

Duality. Primal program P: Maximize n. Dual program D: Minimize m. j=1 c jx j subject to n. j=1. i=1 b iy i subject to m. i=1 Duality Primal program P: Maximize n j=1 c jx j subject to n a ij x j b i, i = 1, 2,..., m j=1 x j 0, j = 1, 2,..., n Dual program D: Minimize m i=1 b iy i subject to m a ij x j c j, j = 1, 2,..., n i=1

More information