# Discrete Optimization. Lecture Notes 2


## Disjunctive Constraints

Defining variables and formulating linear constraints can be straightforward or more sophisticated, depending on the problem structure. The language of linear constraints is surprisingly expressive. As another illustration of their modelling power, and of the tricks we may have to apply, we discuss a more complicated example involving disjunctive constraints. Sometimes we have pairs of constraints of which only one must be satisfied (logical OR). For instance, in several job scheduling problems a machine can process only one job at a time, and running jobs cannot be interrupted. Thus, for any two jobs i, j running on the same machine, one of the following two propositions must be true: i precedes j OR j precedes i. Suppose that job k needs processing time p_k. One can formulate different objectives (such as: finish all jobs as soon as possible), but at the moment we are only concerned with the formulation of the constraints. We can assume that all work starts at time 0. A schedule is then completely described by the start time t_k ≥ 0 of every job k. Obviously our variables must fulfill

t_i + p_i ≤ t_j  OR  t_j + p_j ≤ t_i,  for all pairs i, j of jobs.

But how can we express this OR by linear constraints connected by AND? One way is to introduce new 0,1-variables: x_ij = 1 if job i precedes job j, and x_ij = 0 else. (Note that we obtain a MIP.) Let M > 0 be some large enough constant. We create the following constraints for all i, j:

t_i + p_i ≤ t_j + M(1 - x_ij)  AND  t_j + p_j ≤ t_i + M x_ij.

These constraints express that the jobs do not overlap. (Think about it.)
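To see the big-M constraints in action, here is a small plain-Python check on a made-up two-job instance; the instance data and the constant M = 100 are illustrative choices, not part of the notes:

```python
def no_overlap_ok(t, p, i, j, x_ij, M):
    """Check the two big-M constraints for the job pair (i, j).

    x_ij = 1 means job i precedes job j; M must exceed any possible
    difference of start and finish times, so that one of the two
    constraints is always vacuously satisfied."""
    c1 = t[i] + p[i] <= t[j] + M * (1 - x_ij)   # binding iff x_ij = 1
    c2 = t[j] + p[j] <= t[i] + M * x_ij         # binding iff x_ij = 0
    return c1 and c2

# Made-up instance: job 1 starts at 0 (duration 3), job 2 starts at 3 (duration 2).
t = {1: 0.0, 2: 3.0}
p = {1: 3.0, 2: 2.0}
M = 100.0

# Feasible with x_12 = 1 (job 1 precedes job 2) ...
print(no_overlap_ok(t, p, 1, 2, 1, M))   # → True
# ... but infeasible with x_12 = 0, which would demand job 2 precedes job 1.
print(no_overlap_ok(t, p, 1, 2, 0, M))   # → False
```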
We remark that scheduling is a big field, almost a science in itself, with its own standard terminology and a huge variety of optimization problems: jobs can have release times and due times, preemption may be allowed or not, machines can be identical or different, there can be precedence constraints between jobs, one may aim at minimizing the total work, the makespan, or some weighted combination, and much more.

## Minimizing the Sum Norm

In many optimization tasks one is interested in a solution vector x with minimum l_1-norm, that is, min Σ_i |x_i| subject to some linear constraints. Because of the absolute-value terms this objective function is not linear. An obvious idea to turn it into an LP is to introduce new variables y_i to replace |x_i| and to express y_i = |x_i| by constraints: we may state y_i ≥ 0, further y_i = x_i or y_i = -x_i, write them as inequalities, then apply the trick for disjunctive constraints, and so on. However, in this case there exists a much more elegant way. Instead of the constraints proposed above, we simply introduce a pair of constraints y_i ≥ x_i and y_i ≥ -x_i. This does not seem to solve our problem, as this pair of constraints by itself does not express y_i = |x_i|; it only says y_i ≥ |x_i|. However, we are minimizing Σ_i y_i in the end. Assume that y_i > |x_i| for some i in a solution. Then we can decrease y_i to y_i = |x_i| without violating any constraints, because y_i appears only in the new constraints. This guarantees y_i = |x_i| in any optimal solution, and this is all we need.

## Wrap-up

Some general remarks are appropriate here. We have seen that various problems can be written as LP, ILP, or MIP. However, there is much freedom in the definition of variables and the choice of constraints. Often this modelling phase alone is not straightforward but a creative act, or a matter of experience. The constraints must describe the feasible set, but that does not mean they are uniquely determined; for example, infinitely many sets of linear constraints describe the same set of integer points! The question arises which of many equivalent formulations is favourable. One main criterion is the computational complexity of the algorithms that we can apply. So far we have addressed problem formulations, but not solution methods. Which algorithms can solve a given problem, once we have, e.g., an ILP for it? Are they fast enough? Will they always output optimal solutions? If not, how close are the solutions to the optimal ones? Finally, (I)LP is not the only way to model optimization problems. It is often preferred because the mathematical theory of LP is rather general, well understood, and powerful, and there exist generic algorithms, implemented in software packages. But we are not forced to squeeze every problem into ILP form. For many problems, special-purpose algorithms can be simpler

and faster, as they take advantage of special structural features of a problem. In the opposite direction, LP can be further generalized by nonlinear programming, convex optimization, constraint programming, etc., to mention only a few keywords. We conclude with a literature hint. This is not closely tied to the mathematical course contents, but might be some inspiring reading about modelling issues in real-world optimization problems: R. Barták, C. Sheahan, A. Sheahan: MAK(Euro) - a system for modelling, optimising, and analysing production in small and medium enterprises. SOFSEM 2012, Lecture Notes in Computer Science (Springer), vol. 7147, pp. (should be accessible electronically through Chalmers Library).

## Algorithmic Complexity of LP and ILP

The next phase after modelling is to actually solve the problems. How difficult is this?

## The Simplex Algorithm and the Geometry of LP

We outline a classical algorithm for solving LP. Canonical and standard form of LP are equivalent, since we can transform them into each other. To transform a standard LP into a canonical LP, replace Ax = b with Ax ≤ b and Ax ≥ b. Transforming a canonical LP into a standard LP is more interesting: replace Ax ≤ b with Ax + s = b and s ≥ 0, where s is a vector of m new variables, called the slack variables. Introducing yet another variable z that represents the objective function value, we can write this form of an LP as a so-called tableau: s = b - Ax, z = -c^T x. Due to the minus sign, our goal is now to maximize z. In the following we will assume b ≥ 0, which is the case in many LPs arising from natural applications. The general case, where b may also contain negative entries, is handled later. In our tableau we may set x := 0, which implies s = b and z = 0. Since b ≥ 0, this is a feasible solution where n of the n + m variables are 0. We call it a basic feasible solution. Next we try to improve this solution, i.e., to raise z.
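As a concrete illustration, the slack form s = b - Ax, z = -c^T x and its initial basic feasible solution can be written down directly; the 2x2 instance below is made up for the example:

```python
def initial_tableau(A, b, c):
    """Turn min c^T x, Ax <= b, x >= 0 (with b >= 0) into the slack form
    s = b - Ax, z = -c^T x, and return the basic feasible solution x = 0."""
    n = len(c)
    x = [0.0] * n   # set all original (nonbasic) variables to 0
    s = [bi - sum(aij * xj for aij, xj in zip(row, x))
         for row, bi in zip(A, b)]
    z = sum(-ci * xi for ci, xi in zip(c, x))
    # b >= 0 guarantees that x = 0 is feasible: all slacks are nonnegative.
    assert all(si >= 0 for si in s)
    return x, s, z

# A made-up 2x2 instance with b >= 0:
A = [[1.0, 2.0], [3.0, 1.0]]
b = [4.0, 6.0]
c = [-1.0, -2.0]
x, s, z = initial_tableau(A, b, c)
print(x, s, z)   # → [0.0, 0.0] [4.0, 6.0] 0.0
```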
In order to describe the general step, we introduce the general notion of a tableau. It looks as follows: x_B = β - Λ x_N, z = z_0 + γ^T x_N. Here, x_B and x_N are vectors of m and n nonnegative variables called basic and nonbasic variables, respectively. The other symbols stand for constants (matrices, vectors, numbers), and

β ≥ 0 is required. Note that our initial tableau s = b - Ax, z = -c^T x fits this scheme. By x_N := 0 we get a basic feasible solution with z = z_0. Now suppose that γ_j > 0 holds for some j. If we increase the jth nonbasic variable, we obviously improve z. We can increase it as long as none of the basic variables becomes negative. As soon as one of the positive basic variables reaches 0, we remove it from the basis, while the increased nonbasic variable is moved into the basis. After this exchange we have to rewrite the tableau. (For the moment we skip the details.) The property β ≥ 0 is preserved, since β = x_B if x_N = 0, and x_B ≥ 0 holds by construction. This exchange is also called a pivot step. We repeatedly apply pivot steps until γ ≤ 0. At this moment we know that the current solution is optimal, since any feasible solution must satisfy x_N ≥ 0. This algorithm, which successively improves basic feasible solutions by pivot steps exchanging basic and nonbasic variables, is called the simplex algorithm. Its name is explained by the geometric interpretation. Note that the set of points satisfying linear inequality constraints is an intersection of halfspaces, that is, a convex polytope. Specifically, in the (n + m)-dimensional space of variables and slack variables, the m equality constraints describe an n-dimensional subspace in which the feasible set is a convex polytope, also called a simplex. The basic feasible solutions are the vertices of this polytope, because n variables are 0. Since the objective function is linear, it attains its optimum at some vertex of the polytope. It follows that some optimal solution must be a basic feasible solution. The simplex algorithm proceeds from a vertex to a neighbouring vertex (along an edge of the polytope) with a better objective value, as long as possible. From convexity it follows that a local optimum is also a global optimum. We have to discuss the computational details of tableau rewriting.
After every pivot step we must express the new basic variable x_j in terms of the nonbasic variables. We take the equation which had the new nonbasic variable on its left-hand side and solve it for x_j. It contains x_j with a negative coefficient, since it was this equation that limited the increase of x_j. Then we substitute x_j in all other equations. In a pivot step it may happen that the selected nonbasic variable can increase forever. But then the LP itself is unbounded and has no finite optimal value, hence this case is not a problem. Nevertheless, the simplex algorithm suffers from other problems. It may happen that no nonbasic variable can increase, because some basic variable is already 0 and would

become negative. We speak of a degeneracy. In geometric language, this case appears if more than n bounding hyperplanes of the polytope pass through the current vertex. We can still exchange two variables, but without improving z. In the worst case we may run into a cycle of degenerate tableaus. A simple trick to break such degeneracies is to add small perturbations to b, thus splitting a degenerate vertex into several regular vertices close to each other. In this way we can escape from every degeneracy, and in the end we can undo the perturbations and obtain an exact solution. It also follows that the simplex algorithm always terminates, because the number of different vertices is bounded by the binomial coefficient (n+m choose m). Remember that we assumed b ≥ 0 in the beginning. It remains to discuss LP, say in canonical form min c^T x, Ax ≤ b, x ≥ 0, with an arbitrary vector b. We introduce a variable x_0 and consider the auxiliary problem min x_0 subject to Ax - x_0·1 ≤ b, x ≥ 0, x_0 ≥ 0, where 1 denotes the vector of m entries 1. Now we start from the tableau s = b - Ax + x_0·1, z = -x_0. We set x = 0 and increase x_0 until s ≥ 0. At this moment we have b + x_0·1 ≥ 0, and moreover some slack variable is 0. Exchanging this slack variable with x_0 yields a feasible tableau. Hence from now on we can use the simplex algorithm to solve the auxiliary problem. If the optimal x_0 is nonzero then, by construction, the original LP has no feasible solution. If x_0 = 0, we can finally ignore x_0 and get a feasible tableau for the original problem, hence we can continue with the simplex algorithm. (If x_0 is currently a basic variable, first exchange it once more, and then ignore it.) This procedure settles the case of arbitrary vectors b. In a pivot step we have in general the choice between several nonbasic variables. We may choose any of them, but we would prefer a choice rule that leads us to the optimum as quickly as possible. Several heuristic rules work well in most cases.
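The pivot mechanics described above can be sketched as a small dictionary-form simplex in plain Python. This is an illustrative sketch, not production code: it assumes b ≥ 0 (Phase I for general b is described above), it ignores degeneracy, and the entering rule (first improving column) is our choice, not fixed by the notes:

```python
def simplex(A, b, c, eps=1e-9):
    """Dictionary-form simplex for: maximize c^T x  s.t.  Ax <= b, x >= 0,
    assuming b >= 0 so that x = 0 is a basic feasible solution.

    The tableau is kept as  x_B = beta - Lam * x_N,  z = z0 + gamma^T x_N.
    Returns (optimal value, optimal x)."""
    m, n = len(A), len(c)
    N = list(range(n))              # nonbasic: the original variables
    B = [n + i for i in range(m)]   # basic: the slack variables
    Lam = [row[:] for row in A]
    beta = [float(v) for v in b]
    gamma = [float(v) for v in c]
    z0 = 0.0
    while True:
        improving = [k for k in range(n) if gamma[k] > eps]
        if not improving:
            return z0, _solution(B, beta, n)   # gamma <= 0: optimal
        j = improving[0]                       # entering variable
        rows = [i for i in range(m) if Lam[i][j] > eps]
        if not rows:
            raise ValueError("LP is unbounded")
        r = min(rows, key=lambda i: beta[i] / Lam[i][j])   # ratio test: leaving row
        # Pivot: solve row r for the entering variable, substitute everywhere.
        piv = Lam[r][j]
        newrow = [a / piv for a in Lam[r]]
        newrow[j] = 1.0 / piv                  # coefficient of the leaving variable
        newb = beta[r] / piv
        for i in range(m):
            if i == r:
                continue
            f = Lam[i][j]
            beta[i] -= f * newb
            Lam[i] = [-f * newrow[j] if k == j else Lam[i][k] - f * newrow[k]
                      for k in range(n)]
        g = gamma[j]
        z0 += g * newb
        gamma = [-g * newrow[j] if k == j else gamma[k] - g * newrow[k]
                 for k in range(n)]
        Lam[r], beta[r] = newrow, newb
        B[r], N[j] = N[j], B[r]                # exchange basic/nonbasic labels

def _solution(B, beta, n):
    x = [0.0] * n
    for i, v in enumerate(B):
        if v < n:                              # original variable currently basic
            x[v] = beta[i]
    return x

# maximize 3x1 + 5x2  s.t.  x1 <= 4, 2 x2 <= 12, 3x1 + 2x2 <= 18
z, x = simplex([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]], [4.0, 12.0, 18.0], [3.0, 5.0])
print(z, x)   # → 36.0 [2.0, 6.0]
```

Each loop iteration is exactly one pivot step: a vertex-to-vertex move along an edge of the polytope, with the ratio test picking the basic variable that reaches 0 first.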
Besides the simplex algorithm, the so-called interior point methods (not discussed here) are also widely used.

## ILP is NP-complete

We presume that you already know the notions of polynomial reduction, NP-completeness, and the satisfiability problem (SAT) for Boolean formulas in conjunctive normal form (CNF). An important fact is that ILP is NP-complete. To see this, we reduce the NP-complete SAT problem to ILP. In other words, we reformulate any instance of this hard logical problem in polynomial time as an ILP. The idea

is really simple: transform every clause of the given CNF into a linear constraint as follows. The Boolean values 0, 1 are interpreted as real numbers, and the logical OR is replaced with ordinary addition of numbers (+). A Boolean variable x_i is interpreted as an integer variable x_i with 0 ≤ x_i ≤ 1. A negated Boolean variable ¬x_i is replaced with 1 - x_i. Now a clause is true if and only if the sum of these terms is at least 1. Hence our ILP has a feasible solution if and only if the given CNF formula is satisfiable. It also follows that MIP is NP-complete. A consequence is that an ILP formulation alone does not yield a fast algorithm for an optimization problem. We must also utilize specific features of the problem to get good solutions in reasonable time, and therefore we need various approaches to solve such problems. Beware of a frequent misunderstanding: the result does not mean that every single ILP is hard to solve; it only says that (probably) no fast algorithm exists that would be able to solve all ILPs. However, one example of a specific NP-complete integer optimization problem is 0,1-Knapsack. (We do not prove this here.) This might be astonishing, because this problem has only one linear constraint. One might expect that integer problems are easier than the corresponding problems with real variables, but the opposite is true: there exist polynomial-time algorithms for LP. The situation is even more bizarre: polynomial-time algorithms for LP are barely practical, while the simplex method, which needs exponential time in the worst case, is practical in the sense that it works much faster for the vast majority of practical instances. This was known empirically for a long time. Finally this fact has also found a theoretical explanation in an exciting result of Spielman and Teng (Journal of the ACM 51 (2004), pp.). They analyzed the average runtime in some neighborhood of any instance, that is, an initial instance is slightly modified in a randomized way.
The average time is polynomial, even if the initial instance is nasty.
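Returning to the SAT reduction: the clause-to-constraint translation is easy to state in code. The sketch below (with illustrative helper names of our choosing) encodes a clause as a list of signed variable indices, builds the linear left-hand side, and verifies on all assignments that feasibility of the constraints coincides with Boolean satisfaction:

```python
from itertools import product

def clause_value(clause, assignment):
    """Linear left-hand side for one clause.

    A clause is a list of nonzero integers: +i stands for the literal x_i,
    -i for the negated literal, which is replaced by (1 - x_i)."""
    return sum(assignment[lit] if lit > 0 else 1 - assignment[-lit]
               for lit in clause)

def cnf_satisfied(cnf, assignment):
    # The ILP is feasible under 'assignment' iff every clause sums to >= 1.
    return all(clause_value(cl, assignment) >= 1 for cl in cnf)

# (x1 OR NOT x2) AND (x2 OR x3): compare against direct Boolean evaluation
cnf = [[1, -2], [2, 3]]
for bits in product([0, 1], repeat=3):
    a = {1: bits[0], 2: bits[1], 3: bits[2]}
    boolean = (a[1] or not a[2]) and (a[2] or a[3])
    assert cnf_satisfied(cnf, a) == bool(boolean)
print("clause translation agrees with Boolean semantics")
```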


### Approximation Algorithms: The Primal-Dual Method. My T. Thai

Approximation Algorithms: The Primal-Dual Method My T. Thai 1 Overview of the Primal-Dual Method Consider the following primal program, called P: min st n c j x j j=1 n a ij x j b i j=1 x j 0 Then the

### The Encoding Complexity of Network Coding

The Encoding Complexity of Network Coding Michael Langberg Alexander Sprintson Jehoshua Bruck California Institute of Technology Email: mikel,spalex,bruck @caltech.edu Abstract In the multicast network

### Convex Optimization - Chapter 1-2. Xiangru Lian August 28, 2015

Convex Optimization - Chapter 1-2 Xiangru Lian August 28, 2015 1 Mathematical optimization minimize f 0 (x) s.t. f j (x) 0, j=1,,m, (1) x S x. (x 1,,x n ). optimization variable. f 0. R n R. objective

### Lecture 12 March 4th

Math 239: Discrete Mathematics for the Life Sciences Spring 2008 Lecture 12 March 4th Lecturer: Lior Pachter Scribe/ Editor: Wenjing Zheng/ Shaowei Lin 12.1 Alignment Polytopes Recall that the alignment

### Algebraic Geometry of Segmentation and Tracking

Ma191b Winter 2017 Geometry of Neuroscience Geometry of lines in 3-space and Segmentation and Tracking This lecture is based on the papers: Reference: Marco Pellegrini, Ray shooting and lines in space.

### A Simplied NP-complete MAXSAT Problem. Abstract. It is shown that the MAX2SAT problem is NP-complete even if every variable

A Simplied NP-complete MAXSAT Problem Venkatesh Raman 1, B. Ravikumar 2 and S. Srinivasa Rao 1 1 The Institute of Mathematical Sciences, C. I. T. Campus, Chennai 600 113. India 2 Department of Computer

### P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989

University of Pennsylvania ScholarlyCommons Technical Reports (CIS) Department of Computer & Information Science October 1989 P Is Not Equal to NP Jon Freeman University of Pennsylvania Follow this and

### 9. Graph Isomorphism and the Lasserre Hierarchy.

Sum of squares and integer programming relaxations Lecture 9 March, 014 9. Graph Isomorphism and the Lasserre Hierarchy. Lecturer: Massimo Lauria Let G and H be two graphs, we say that two graphs are isomorphic,

### Bilinear Programming

Bilinear Programming Artyom G. Nahapetyan Center for Applied Optimization Industrial and Systems Engineering Department University of Florida Gainesville, Florida 32611-6595 Email address: artyom@ufl.edu

### Projection-Based Methods in Optimization

Projection-Based Methods in Optimization Charles Byrne (Charles Byrne@uml.edu) http://faculty.uml.edu/cbyrne/cbyrne.html Department of Mathematical Sciences University of Massachusetts Lowell Lowell, MA

### Orientation of manifolds - definition*

Bulletin of the Manifold Atlas - definition (2013) Orientation of manifolds - definition* MATTHIAS KRECK 1. Zero dimensional manifolds For zero dimensional manifolds an orientation is a map from the manifold

### Division of the Humanities and Social Sciences. Convex Analysis and Economic Theory Winter Separation theorems

Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Topic 8: Separation theorems 8.1 Hyperplanes and half spaces Recall that a hyperplane in

### arxiv: v1 [math.co] 17 Jan 2014

Regular matchstick graphs Sascha Kurz Fakultät für Mathematik, Physik und Informatik, Universität Bayreuth, Germany Rom Pinchasi Mathematics Dept., Technion Israel Institute of Technology, Haifa 2000,

### MATLAB Solution of Linear Programming Problems

MATLAB Solution of Linear Programming Problems The simplex method is included in MATLAB using linprog function. All is needed is to have the problem expressed in the terms of MATLAB definitions. Appendix

### Lecture 2. 1 Introduction. 2 The Set Cover Problem. COMPSCI 632: Approximation Algorithms August 30, 2017

COMPSCI 632: Approximation Algorithms August 30, 2017 Lecturer: Debmalya Panigrahi Lecture 2 Scribe: Nat Kell 1 Introduction In this lecture, we examine a variety of problems for which we give greedy approximation

### Dynamic Programming Algorithms

CSC 364S Notes University of Toronto, Fall 2003 Dynamic Programming Algorithms The setting is as follows. We wish to find a solution to a given problem which optimizes some quantity Q of interest; for

### Lecture 5. If, as shown in figure, we form a right triangle With P1 and P2 as vertices, then length of the horizontal

Distance; Circles; Equations of the form Lecture 5 y = ax + bx + c In this lecture we shall derive a formula for the distance between two points in a coordinate plane, and we shall use that formula to

### A tree traversal algorithm for decision problems in knot theory and 3-manifold topology

A tree traversal algorithm for decision problems in knot theory and 3-manifold topology Benjamin A. Burton and Melih Ozlen Author s self-archived version Available from http://www.maths.uq.edu.au/~bab/papers/

### Operations Research. Unit-I. Course Description:

Operations Research Course Description: Operations Research is a very important area of study, which tracks its roots to business applications. It combines the three broad disciplines of Mathematics, Computer

### Optimality certificates for convex minimization and Helly numbers

Optimality certificates for convex minimization and Helly numbers Amitabh Basu Michele Conforti Gérard Cornuéjols Robert Weismantel Stefan Weltge October 20, 2016 Abstract We consider the problem of minimizing

### Pivot Rules for Linear Programming: A Survey on Recent Theoretical Developments

Pivot Rules for Linear Programming: A Survey on Recent Theoretical Developments T. Terlaky and S. Zhang November 1991 1 Address of the authors: Tamás Terlaky 1 Faculty of Technical Mathematics and Computer

### Convexity. 1 X i is convex. = b is a hyperplane in R n, and is denoted H(p, b) i.e.,

Convexity We ll assume throughout, without always saying so, that we re in the finite-dimensional Euclidean vector space R n, although sometimes, for statements that hold in any vector space, we ll say