A Parallel Macro Partitioning Framework for Solving Mixed Integer Programs


This research is funded by NSF CMMI and CIEG grant 0521953: Exploiting Cyberinfrastructure to Solve Real-Time Integer Programs.
A Parallel Macro Partitioning Framework for Solving Mixed Integer Programs
Mahdi Namazifar, Industrial and Systems Engineering Department, University of Wisconsin-Madison
Andrew J. Miller, RealOpt, INRIA Bordeaux Sud-Ouest, Université de Bordeaux 1
(with thanks to Michael C. Ferris)
ARS08 Workshop, LIX, Ecole Polytechnique, Paris

Outline
- Background
  - MIP heuristics
  - Parallel branch-and-bound: current state of the art
  - Parallel computing architectures: massively parallel, grid computing
- A Macro Partitioning Approach
  - Brancher
  - Assigner
  - Workers
- Early computational results
- Challenges in parallelizing MIP solvers
- To-do lists

Background
We consider a general 0-1 mixed integer programming (MIP) problem.
THE DIFFICULTY: Problems of realistic size are often hard to solve and even harder to solve quickly.
THE OPPORTUNITY: Increasing availability of multiple CPUs in parallel architectures.
THE CHALLENGE: How can we fully exploit available computational resources in the solution of large MIPs?
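The formulation on this slide was an image and is missing from the transcript; assuming the usual notation (binary variables x, continuous variables y), the general 0-1 MIP in question can be written as

\[
\min \; c^{\top} x + d^{\top} y
\quad \text{s.t.} \quad A x + G y \ge b, \qquad x \in \{0,1\}^{n}, \quad y \ge 0.
\]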

Example: MIPLIB 2003
A library of test problems, available at http://miplib.zib.de/. These problems come from a variety of applications, and many remained unsolved for years.
- atlanta-ip (21732 constraints, 106 general integer variables, 46667 binary variables, 1965 continuous variables): unsolvable before 2006; Xpress-MP can now solve it in about five hours with specified settings.
- protfold (2112 constraints, 1835 binary variables): Xpress-MP can solve this in several days on a dual-core machine with optimized settings.
- dano3mip (3202 constraints, 13321 continuous variables, 552 binary variables): still unsolved.
Numerous other problems are still unsolved, or take many hours or even days of computation time to solve. These instances are not particularly large!

NSF-CMMI 0521953: Real-Time Mixed Integer Programming
Premise: MIP has proven to be a powerful methodology for solving design and strategic problems; less so for real-time operational problems.
Can we use all the computational power at our disposal to turn MIP into a technology that can provide decision support in real time?
- Optimization: either the true problem or a pre-computing stage
- Re-optimization: sensitivity analysis, warm starts, etc.

A Great Unsolved Problem
"Until recently, most applications of integer programming have been to planning models where solution time is not an issue. Significant improvements in methodology, high-speed computing, and data availability have made it possible to apply integer programming at the operational level for instances of modest size, where solution time may take minutes. The next challenge is real-time mixed-integer programming (RTMIP). While such problems are prevalent in numerous application areas, the technology available for their solution is still at the research level. We believe that this pioneering use of cyberinfrastructure will open up new possibilities for the operations research community to exploit the computational resources, data storage capabilities, and communication bandwidth that are now available for use in real-time decision-making."
- George Nemhauser, "Need and Potential for Real-Time Mixed-Integer Programming," Great Unsolved Problems in OR feature, ORMS Today, February 2007. http://www.lionhrtpub.com/orms/orms-2-07/frinside.html

Parallel Computing: Massively Parallel Computers
- many, many dedicated processors
- very centralized: if one processor crashes, the whole system may be affected
- emphasis on defining subproblems quickly (the ramp-up process); otherwise, many dedicated processors are doing nothing at the beginning
- strong emphasis on load balancing (otherwise, many dedicated processors are doing nothing at the end)
- little emphasis on reducing the amount of information passed (but "little" is not zero!)
This is the framework that we will focus on in this talk, but we will keep the other one in mind.

Parallel Computing: Grid Computing
- many spare processors
- very decentralized: if one processor crashes, the work will be re-started, and the rest keep going without noticing
- large emphasis on reducing the amount of information passed
- significant emphasis on load balancing (defining work for many processors)
- less emphasis on efficient ramp-up

The current state of the art
The most robust methods for solving general MIPs are LP-based branch-and-bound and branch-and-cut methods. A number of researchers have investigated parallelizing these methods (Linderoth, Perumalla, Savelsbergh [1997]; Ralphs [2002]; Ferris, Pataki, Schmieta [2003]; Eckstein [2003]). The best commercial solvers can use up to 32 processors in their branch-and-cut codes.

The current state of the art
Considerable speedup (though not close to linear) can often be obtained through the right search strategies:
- passing only long, skinny subtrees
- sophisticated subtree allocation to processors based on regular checkpointing
However, this approach has evident performance bottlenecks:
- generating enough interesting subtrees (not too large, not too trivial)
- passing all this information
Hence the lack of implementations for more than ~32 processors.

Research Issues
If we want to be able to use hundreds or thousands of processors to solve MIPs, we need to re-think the framework that we use. In particular, we need to address at least the following questions.
Question: What should each processor do? How can we effectively use many at once?
Answer: We need to partition the problem into many non-overlapping, tractable, nontrivial sub-problems very quickly (so that each can be assigned to a different processor).
Question: How can we define these subproblems? (We have seen that we need alternatives to single-variable branching.)
Answer: We use LP-and-FIX, RINS, Local Branching, and Solution Crossing cuts.

Primal Heuristics
Two main classes:
- Construction heuristics: these produce a feasible solution from scratch (example: LP-and-FIX).
- Improvement heuristics: these try to improve a given feasible solution (examples: RINS, Local Branching, and Solution Crossing).

LP-and-FIX
IDEA: Explore a sub-space defined by the current LP relaxation solution.
HOW: Fix the variables that take integral values in the LP relaxation solution and solve the resulting problem.
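The restricted problem appeared as a formula on the slide and is not in the transcript; with \(\bar{x}\) denoting the LP relaxation solution, the LP-and-FIX sub-problem would be the original MIP plus the fixings

\[
x_j = \bar{x}_j \qquad \text{for every binary index } j \text{ with } \bar{x}_j \in \{0,1\}.
\]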

RINS (Relaxation Induced Neighborhood Search) [E. Danna, E. Rothberg, and C. Le Pape 2005]
IDEA: Explore the sub-space defined by the intersection of the LP relaxation solution and an MIP feasible solution.
HOW: If a binary variable has the same value in both solutions, fix its value.
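Again the slide's formula is missing; with \(\bar{x}\) the LP relaxation solution and \(\tilde{x}\) the incumbent MIP feasible solution, the RINS fixing amounts to

\[
x_j = \tilde{x}_j \qquad \text{for every binary index } j \text{ with } \bar{x}_j = \tilde{x}_j.
\]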

Local Branching
IDEA: The same as RINS: explore the neighborhood around an MIP feasible solution [Fischetti and Lodi 2003].
HOW: The neighborhood consists of vectors whose binary components differ from those of the incumbent in no more than k indices.
Note that this strategy is orthogonal to that defined by RINS.
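This neighborhood is the one defined by the standard local branching constraint of Fischetti and Lodi (the inequality itself is not reproduced in the transcript), where \(S_1\) and \(S_0\) are the sets of binary variables at 1 and at 0 in the incumbent and \(k\) is the neighborhood size:

\[
\sum_{j \in S_1} (1 - x_j) \;+\; \sum_{j \in S_0} x_j \;\le\; k.
\]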

Solution Crossing
IDEA: Use concepts from evolutionary methods to improve the existing feasible solutions ([E. Rothberg 2007]).
HOW:
- Population: a set of feasible solutions
- Combination: similar to RINS
- Mutation

A Parallel Macro Partitioning Framework (PMaP)
- Brancher: generates sub-problems (single processor)
- Workers: solve sub-problems (many processors)
- Assigner: assigns the sub-problems generated by the Brancher to the Workers (single processor)

Brancher
- Starts solving the main problem using branch and bound.
- At each node of the branch-and-bound tree, if there exist any feasible solutions, for each one it generates a RINS problem, puts this problem in the sub-problem pool, and adds the following cut to the problem it is solving: the complement of the RINS cut, where S_0 is the set of variables with value 0 in the feasible solution and S_1 is the set of variables with value 1 in the feasible solution.
- At each node of the branch-and-bound tree, if there is no feasible solution, it generates an LP-and-FIX problem, puts it in the sub-problem pool, and adds the following cut to the problem it is solving: the complement of the LP-and-FIX cut, where S_0 is the set of variables with value 0 in the LP relaxation solution and S_1 is the set of variables with value 1 in the LP relaxation solution.
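The cut itself is not shown in the transcript; an inequality consistent with the definitions of \(S_0\) and \(S_1\) above, which forces at least one binary variable to move away from the reference solution (the MIP feasible solution for the RINS cut, the LP relaxation solution for the LP-and-FIX cut), would be

\[
\sum_{j \in S_0} x_j \;+\; \sum_{j \in S_1} (1 - x_j) \;\ge\; 1.
\]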

Brancher
[Architecture diagram: Brancher, Assigner, Workers, Sub-problem Pool, Feasible Solution Pool]

Worker Processor
- Waits until a sub-problem is assigned to it.
- Starts solving the sub-problem using branch and bound.
- Whenever it finds a feasible solution, writes it into the Feasible Solution Pool.
- When the solution process is over, sends a message to the Assigner.
- Waits until the next sub-problem is assigned to it and repeats the same procedure.
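A later slide notes that PMaP is implemented on top of COIN-OR Cbc and that processes communicate via MPI and text files. Purely as an illustration of the control flow listed above, here is a minimal Python/mpi4py sketch of a worker loop; the ranks, tags, solve_subproblem callback, and pool file name are all invented for the sketch and are not part of PMaP.

```python
# Illustrative sketch only; not PMaP code. Assumes the Assigner runs at MPI
# rank 0 and sends one picklable sub-problem per message.
from mpi4py import MPI

comm = MPI.COMM_WORLD
ASSIGNER_RANK = 0            # assumed rank of the Assigner process
TAG_ASSIGN, TAG_DONE = 1, 2  # assumed message tags

def worker_loop(solve_subproblem, pool_file="feasible_solution_pool.txt"):
    """solve_subproblem(sub) is a placeholder branch-and-bound call that
    yields every feasible solution it finds."""
    while True:
        # Wait until a sub-problem is assigned to this worker.
        subproblem = comm.recv(source=ASSIGNER_RANK, tag=TAG_ASSIGN)
        if subproblem is None:                    # assumed shutdown signal
            break
        # Solve it by branch and bound; write each incumbent to the pool.
        for solution in solve_subproblem(subproblem):
            with open(pool_file, "a") as pool:
                pool.write(repr(solution) + "\n")
        # Tell the Assigner this worker is idle again, then wait for more work.
        comm.send(comm.Get_rank(), dest=ASSIGNER_RANK, tag=TAG_DONE)
```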

Worker
[Architecture diagram, as above]

Assigner
While the program is running:
- Checks the sub-problem pool. If there exists one or more sub-problems in the pool, gets one of them; otherwise waits until one appears.
- Checks the status of the worker processors. If there is any idle worker processor, assigns the problem in hand to it; otherwise waits until one becomes free and then assigns the problem.
- Updates the status of the worker processors.
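With the same caveats as the worker sketch above (invented tags, ranks, and data structures; not PMaP's actual code), the Assigner loop described here might look like this in Python/mpi4py, where subproblem_pool is any blocking queue fed by the Brancher:

```python
# Illustrative sketch only; pairs with the worker sketch above.
from collections import deque
from mpi4py import MPI

comm = MPI.COMM_WORLD
TAG_ASSIGN, TAG_DONE = 1, 2  # assumed message tags, as in the worker sketch

def assigner_loop(subproblem_pool, worker_ranks):
    idle = deque(worker_ranks)                    # every worker starts idle
    while True:
        # Blocks until the Brancher has put a sub-problem in the pool.
        subproblem = subproblem_pool.get()
        # If no worker is idle, wait for one to report that it has finished.
        if not idle:
            idle.append(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE))
        # Hand the sub-problem to an idle worker.
        comm.send(subproblem, dest=idle.popleft(), tag=TAG_ASSIGN)
        # Update worker status: drain any other completion messages waiting.
        while comm.iprobe(source=MPI.ANY_SOURCE, tag=TAG_DONE):
            idle.append(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE))
```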

Assigner
[Architecture diagram, as above]

The Framework
[Architecture diagram: the Brancher writes sub-problems to the Sub-problem Pool; the Assigner dispatches them to the Workers; the Workers write feasible solutions to the Feasible Solution Pool.]

Some Notes!
- Implemented using COIN-OR Cbc.
- Communication between processors is done through two channels: MPI (Message Passing Interface) and text files.
- We run the program on the DataStar machine at the San Diego Supercomputer Center (SDSC).
- DataStar consists of nodes of two types: 8-way p655+ and 32-way p690+ nodes.
- For test runs we usually use p690+ nodes with 128 GB of memory per node and 1.7 GHz CPUs.
- PMaP is at a preliminary stage. There is still a long way to go!

Results
The experiments ran for 30 minutes. We used 35 processors for each run of PMaP.

Problem     | COIN-Cbc     | PMaP (using Cbc) | Optimal Solution
aflow40b    | 1274         | 1168             | 1168
seymour     | 435          | 425              | 423
harp2       | -7.29769e+7  | -7.38998e+7      | -7.38998e+7
markshare1  | 14           | 4                | 1
markshare2  | 33           | 16               | 1
mas74       | 12886        | 11801.2          | 11801.2
mas76       | 40935.1      | 40005.05         | 40005.05
nsrand-ipx  | 54560        | 54080            | 54080

Newer Results
Again each problem ran for 30 minutes, and PMaP used 35 processors.

Problem   | COIN-Cbc    | PMaP (using Cbc) | Optimal Solution
dano3mip  | 791.385     | 719.782          | ?
protfold  | -           | -21              | -31
sp97ar    | -           | 685860294.1      | ?
glass4    | 2.20002e+9  | 1.90002e+9       | 1.20001e+9

The Most Recent Results
We compare the results with parallel CPLEX 10 installed on DataStar. Each problem ran for 30 minutes. We used 32 processors for each run of PMaP and CPLEX. CPLEX can only be run on a limited number of processors that share the same memory (on DataStar at most 32), but PMaP can be run on as many processors as the machine has.

Problem     | CPLEX        | PMaP (with Cbc) | Optimal Solution
glass4      | 1.60001e+09  | 1.60001e+09     | 1.20001e+09
markshare1  | 7            | 4               | 1
markshare2  | 25           | 16              | 1
protfold    | -20          | -21             | -31
atlanta-ip  | -            | 95.0098         | 90.0099
sp97ar      | 6.62541e+08  | 6.8753e+08      | ?
seymour     | 425          | 425             | 423
danoint     | 65.6667      | 65.6667         | 65.6667
dano3mip    | 698.6296     | 719.782         | ?
a1c1s1      | 11790.1684   | 12054.8         | 11503.4
swath       | 730.1        | 577             | 467.407
liu         | 1236         | 1870            | ?

Preliminary Conclusions
- PMaP is capable of using many different processors to considerable advantage (it improves both COIN-OR Cbc and MINTO enormously).
- PMaP is already competitive with the best commercial solvers on the most powerful parallel frameworks that these solvers can use.

To-Do List: Ongoing
- Explore re-optimization possibilities (we are excited about the potential here).

To-Do List: Immediate
- Extract lower bounds from PMaP.
- Test PMaP on lots of processors (>1000).
- Implement local branching cuts.
- Implement solution crossing cuts.
- Compare the performance of PMaP with other parallel solvers.

To-Do List: Immediately After That
- Incorporate multiple-brancher capability.
- Fine-tune the number of variables fixed (and other parameters) to perform better dynamic load balancing.
- Optimize pre-processing at worker nodes.