Operations Research and Optimization: A Primer

Operations Research and Optimization: A Primer
Ron Rardin, PhD
NSF Program Director, Operations Research and Service Enterprise Engineering
also Professor of Industrial Engineering, Purdue University

Introduction

Operations Research (OR) is the study of mathematical modeling tools for complex, usually large-scale engineering and management design, planning, and control problems. Its major components include optimization methods, stochastic/probability modeling, and event-oriented simulation. The purpose here is to present an elementary primer on the optimization part, to acquaint those not trained in OR with some fundamental concepts and definitions: how do optimization researchers think about planning problems?

A Toy Conformal Therapy Example

To begin, I will ask you to suspend reality and consider a massively oversimplified toy example based loosely on conformal radiotherapy. No claim of correctness in the application, but it allows us to discuss optimization issues in a familiar context.

[Figure: three beams (Beam 1, Beam 2, Beam 3) aimed at a target tumor, each also crossing healthy tissues with dose limits 80, 100, and 60.]

Rules of the toy problem:
- May use at most two of the beams
- Beam intensity is controllable
- Tissues are considered as single points
- If a beam intersects a tissue, it adds dose equal to the beam intensity
- Goal is to maximize tumor dose
- Limit dose on healthy tissues

Parameters and Decisions

Parameters of an optimization problem are values taken as given. Here: the dose limits 60, 80, 100, and the limit of 2 beams.
Decisions (the variables in our models) are what we get to decide or control:
- Discrete decisions are logical, on/off choices (here, which beams are on)
- Continuous decisions take on numeric values (here, the beam intensities)

[Toy conformal therapy figure and rules repeated from the earlier slide.]

Constraints and Feasible Solutions

Constraints of an optimization problem define the applicable limits on decision choices. Here: the 2-beam limit and the healthy-tissue total dose limits.
Feasible solutions are those that satisfy all constraints:
- B1 = B2 = 30: feasible
- B2 = 110, B3 = 20: infeasible (violates a healthy-tissue dose limit)
- B1 = B2 = B3 = 10: infeasible (uses three beams)

[Toy conformal therapy figure and rules repeated from the earlier slide.]

Objective Functions and Optimal Solutions

The objective (or criterion) function is a numerical measure of preference among decision choices. Here: maximize total tumor dose.
An optimal solution is a feasible solution with the best objective value:
- B1 = B2 = 30, tumor dose TD = 60: feasible but not optimal
- B2 = 60, B3 = 40, TD = 100: optimal

[Toy conformal therapy figure and rules repeated from the earlier slide.]

Some Implications

Recap of the vocabulary:
- Parameters (given constants)
- Decisions (discrete or continuous choices)
- Constraints (limits on decision choice)
- Feasible solutions (satisfy all constraints)
- Objective function (quantifies preference)
- Optimal solution (feasible and best in objective)

"Optimal" is a well-defined mathematical concept, too often used casually:
- Every optimal solution has the same objective value, but there can be multiple optimal solutions
- Infeasible solutions can have better-than-optimal objective values
Computing an optimum implies a search over the decision choices: the parameters are fixed, and we look for feasible solutions with good objective values.

Challenge of Multiple Criteria

To apply optimization, or even talk about an optimal solution, we must reduce preference to a single measure. Yet it is extremely common to encounter multiobjective planning problems where more than one criterion should be made as large or as small as possible, e.g. in our toy problem, maximize tumor dose and minimize the purple (healthy-tissue) dose.

[Toy conformal therapy figure and rules repeated from the earlier slide.]

Challenge of Multiple Criteria

A common approach is to turn all but one of the criteria into constraints. E.g. the toy problem keeps tumor dose as the objective, but it could have been any single one. This can be refined with sensitivity analysis, i.e. multiple runs with different values of the constraint parameters, e.g. try purple dose <= 55, 60, 65. Tune in to Eva Lee tomorrow morning for more refined options.

[Toy conformal therapy figure and rules repeated from the earlier slide.]

Inverse Methods

Inverse methods make everything a constraint and minimize the violation, e.g. add a minimum tumor dose restriction (TD >= 150 in the figure). This does give a single objective.
Challenge: how to weight the violations? There is usually no solution that satisfies all requirements, and balancing violation by weighting may produce critical infeasibilities.

[Toy conformal therapy figure and rules repeated from the earlier slide, with the added requirement TD >= 150.]

Models & Tractability

To apply formal optimization methods, we need to represent decisions as variables, and both the constraints and the objective as functions of those variables, in a mathematical model, e.g.

    max  B1 + B2 + B3     (tumor dose)
    s.t. B1 + B3 <= 80    (green tissue limit)
         ...

Math forms are critical to tractability, i.e. convenience for solution; the available search strategies determine what is tractable. A small code sketch of the full toy model follows.

[Toy conformal therapy figure and rules repeated from the earlier slide.]
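To make the model concrete, here is a minimal sketch in Python that enumerates the three allowed beam pairs and solves the small linear program for each. The beam/tissue incidences (the dose<=80 tissue crossed by beams 1 and 3, the dose<=100 tissue by beams 2 and 3, the dose<=60 tissue by beams 1 and 2) are an assumption read off the toy figure, chosen so that they reproduce the numbers quoted on these slides (optimum TD = 100, relaxation bound 120); the use of scipy is likewise just an illustrative choice.

    # Illustrative sketch of the toy model (tissue incidences are assumed, see above).
    from itertools import combinations
    from scipy.optimize import linprog

    A_ub = [[1, 0, 1],    # B1 + B3 <= 80   (assumed: beams 1 and 3 cross this tissue)
            [0, 1, 1],    # B2 + B3 <= 100  (assumed: beams 2 and 3)
            [1, 1, 0]]    # B1 + B2 <= 60   (assumed: beams 1 and 2)
    b_ub = [80, 100, 60]
    c = [-1, -1, -1]      # linprog minimizes, so negate to maximize B1 + B2 + B3

    best = None
    for on in combinations(range(3), 2):            # "at most two beams": try each pair
        bounds = [(0, None) if i in on else (0, 0) for i in range(3)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if res.success and (best is None or -res.fun > best[0]):
            best = (-res.fun, res.x)

    print("best tumor dose:", best[0])              # 100.0, e.g. with B2 = 60, B3 = 40

Each pair leaves only a tiny linear program; trying the pairs one by one is exactly the brute-force enumeration idea discussed on a later slide.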

Hillclimbing (Local Search)

First consider unconstrained search, with only an objective. Picture a surface representing the objective value at different choices of the variables x1 and x2; the maximizing goal is to find the values that correspond to the top of the highest mountain.
Hillclimbing process (see the sketch below):
- Survey the nearby neighborhood
- Find an uphill search direction
- Follow it while it helps, and repeat
- Stop when no such direction exists
Examples: gradient methods, conjugate gradient.

[Figure: objective-value surface over (x1, x2), with the optimal solution at the highest peak.]
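A minimal hillclimbing sketch on a smooth two-variable surface; the particular objective f, the four-move neighborhood, and the fixed step size are illustrative assumptions, not taken from the slides.

    # Minimal hillclimbing sketch (the objective and step rule are illustrative).
    def f(x1, x2):
        # a smooth "mountain" with its single peak at (1, 2)
        return -(x1 - 1) ** 2 - (x2 - 2) ** 2

    def hillclimb(x1, x2, step=0.1, iters=1000):
        for _ in range(iters):
            # survey the nearby neighborhood: four candidate moves
            neighbors = [(x1 + step, x2), (x1 - step, x2),
                         (x1, x2 + step), (x1, x2 - step)]
            best = max(neighbors, key=lambda p: f(*p))
            if f(*best) <= f(x1, x2):      # no uphill direction: stop
                break
            x1, x2 = best                  # follow the uphill move and repeat
        return x1, x2

    print(hillclimb(0.0, 0.0))             # converges near the peak (1, 2)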

Local and Global Optimal Solutions

A local optimum is a feasible solution that is best within the neighborhood of the current one; a global optimum is the overall best. It is easy to see that hillclimbing can lead us to a local optimum that is not global: the search's vision does not extend beyond the immediate neighborhood.
Implication: tractability is enhanced if the objective has no local optima that are not global.

[Figure: objective-value surface over (x1, x2), with a lower peak marked "local optimum, not global" and the highest peak marked "optimal solution".]

Hillclimbing with Constraints

For models with constraints, hillclimbing usually tries to stay feasible, searching from one feasible solution to another with a better objective value. Constraints introduce barriers; if they have irregular form, they can easily block the search at a local optimum.
Implication: tractability is enhanced if the constraint functions are smooth and regular.

[Figure: objective-value surface over (x1, x2), with the feasible solutions and the optimal solution marked.]

Penalty Methods

We can avoid dealing with constraints by weighting their violations in the objective and searching unconstrained; the added objective terms are penalty functions. This frees the search to move, but brings lots of potential problems:
- It can give the objective local optima it did not originally have
- The penalty weights have to be chosen big enough that any unconstrained optimum is feasible
- Choosing the penalties too high costs the search its freedom of movement
A sketch of the idea follows.

[Figure: objective-value surface over (x1, x2), with the feasible solutions, the optimal solution, and the penalized region marked.]
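To make the idea concrete, here is a minimal penalized version of the toy dose model, reusing the illustrative tissue incidences assumed earlier; the quadratic penalty form and the weight value are arbitrary choices for illustration.

    # Minimal penalty-method sketch for the toy dose constraints (illustrative;
    # tissue incidences, penalty form, and weight are assumptions; the
    # at-most-two-beams rule is ignored here for simplicity).
    def penalized_objective(b1, b2, b3, weight=10.0):
        tumor_dose = b1 + b2 + b3
        violations = [max(0.0, b1 + b3 - 80),    # dose <= 80 tissue
                      max(0.0, b2 + b3 - 100),   # dose <= 100 tissue
                      max(0.0, b1 + b2 - 60)]    # dose <= 60 tissue
        # subtract a weighted quadratic penalty for each violated dose limit
        penalty = weight * sum(v ** 2 for v in violations)
        return tumor_dose - penalty

    print(penalized_objective(20, 40, 60))   # feasible point: no penalty, value 120
    print(penalized_objective(20, 40, 80))   # infeasible: penalized far below its raw dose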

Discrete Decisions and Enumeration

Continuous decisions have infinitely many choices. When decisions are discrete, we can think of solving by enumeration, i.e. trying all (or many) of the combinations and keeping the best, e.g. B1&B2, then B2&B3, then B1&B3 in our toy example.
Enumeration quickly becomes impractical as problem size grows:
- 2 yes/no decisions give 4 combinations
- 10 yes/no decisions give 1,024 combinations
- 100 yes/no decisions would occupy a computer evaluating a trillion combinations per second for about 402 million centuries
Real methods do careful partial enumeration of the choices (see the sketch below).
Implication: discrete elements in a model decrease tractability.
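A minimal sketch of brute-force enumeration over yes/no decisions, illustrating how the count of combinations explodes; the evaluation function used here is only a placeholder assumption.

    # Brute-force enumeration over n yes/no decisions (illustrative sketch).
    from itertools import product

    def enumerate_best(n, evaluate):
        best_value, best_choice = float("-inf"), None
        for choice in product([0, 1], repeat=n):   # all 2**n combinations
            value = evaluate(choice)
            if value > best_value:
                best_value, best_choice = value, choice
        return best_value, best_choice

    # Placeholder objective: reward turning decisions on (a stand-in only).
    print(enumerate_best(10, evaluate=sum))        # scans 2**10 = 1,024 combinations
    # At n = 100 this loop would need 2**100 evaluations, which is why real
    # methods rely on careful partial enumeration instead.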

Math Forms and Tractability

Linear functions are weighted sums of the variables, e.g. 3x1 + 2x2 + 1.9x3. They are much easier to deal with in both the objective and the constraints.
Nonlinear functions are everything else, e.g. 3x1*x2 + 1.9x3 + sqrt(x2). They can lead to local optima and difficult searches.
Discrete decisions are usually modeled by integer decision variables (restricted to whole numbers), which leads to more difficult searches and the need for some enumeration.

Classes of Optimization Models

                                  linear objective        nonlinear objective
                                  and constraints         or constraints
    continuous variables          LP                      NLP
    discrete (integer) variables  MIP                     MINLP

LP    = Linear Program (highly tractable)
NLP   = Nonlinear Program (some tractable)
MIP   = Mixed Integer Program (some tractable)
MINLP = Mixed Integer Nonlinear Program (tough)

Strategies: Relaxations & Bounds

Relaxations are easier forms of optimization models obtained by weakening some constraints, e.g. let the discrete variables deciding which beams are on become continuous (allow fractions). The resulting LP gives a solution with all beams partly on and TD = 120, versus the MIP optimum of TD = 100.
Relaxations yield bounds: the easier problem can only have a better answer (120 >= 100). A small sketch follows.

[Toy conformal therapy figure and rules repeated from the earlier slide.]
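A minimal sketch of the relaxation bound on the toy model, again reusing the illustrative tissue incidences assumed earlier: dropping the at-most-two-beams rule leaves a plain LP whose optimum bounds the true MIP optimum from above.

    # LP relaxation of the toy model (illustrative; tissue incidences assumed).
    from scipy.optimize import linprog

    A_ub = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]     # dose limits 80, 100, 60
    b_ub = [80, 100, 60]
    relaxed = linprog(c=[-1, -1, -1], A_ub=A_ub, b_ub=b_ub)  # 2-beam rule dropped
    print("relaxation bound:", -relaxed.fun)     # 120.0 >= MIP optimum of 100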

Strategies: Using Relaxation Bounds

Bounds from relaxations can be used to narrow the search: if the bound for one part of the feasible region is poorer than a known, fully feasible solution found elsewhere, we do not have to search that region (the idea behind branch and bound).
Bounds can also help evaluate solutions obtained by methods that do not assure global optima: compare what was obtained to what might be possible.

Strategies: Heuristics

So far we have dealt mainly with exact optimization, where the goal is to find a mathematically optimal solution (or come very close). Heuristic methods seek only a good feasible solution. There are many heuristic strategies:
- Rounding: solve a relaxation and adjust it to a nearby feasible solution (often in the context of an MIP)
- Constructive: make decisions one by one in sequence, each time making the choice that seems best at the moment (rare in radiation therapy planning)
- Improving: mimic local search, moving to neighboring (and better) feasible solutions (examples include Simulated Annealing and Genetic Algorithms)
- Expert judgment or past experience with similar instances
A rounding sketch follows.
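A minimal rounding-style sketch on the toy model, reusing the illustrative tissue incidences assumed earlier; the "drop the weakest beam from the relaxation" rule is an assumption chosen for illustration, not a method from the slides.

    # Minimal rounding-heuristic sketch on the toy model (illustrative assumptions).
    from scipy.optimize import linprog

    A_ub = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
    b_ub = [80, 100, 60]
    c = [-1, -1, -1]

    relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub)           # ignore the 2-beam rule
    weakest = min(range(3), key=lambda i: relaxed.x[i])  # beam to switch off
    bounds = [(0, 0) if i == weakest else (0, None) for i in range(3)]
    fixed = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print("heuristic tumor dose:", -fixed.fun)           # feasible for the 2-beam rule;
                                                         # here it happens to hit 100.0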

Concerns with Heuristics

Heuristics are often the only way to get a usable solution to a poorly tractable optimization model.
One issue: how near are the solutions to optimal? It is desirable to have a bound on the error (suboptimality) of the heuristic solution, which comes automatically if a relaxation was solved. Methods like Simulated Annealing provide no guarantees at all: they may eventually find an optimum but will not know they have done so, so one must rely on historical experience.
Another issue is the handling of constraints. Many improving-search heuristics (e.g. Simulated Annealing, Genetic Algorithms) can really only do unconstrained optimization, so constraints must be weighted with penalty functions, which raises the questions of what weights to choose and whether the resulting solutions will satisfy all constraints.

Stochastic Optimization

Everything so far has been deterministic optimization, with the parameters known with certainty. This is an obvious oversimplification, because almost everything is estimated and has some uncertainty, especially where the system changes through time. Stochastic optimization methods assume probability distributions on the parameters to model this uncertainty.

[Figure: a probability distribution over a parameter value.]

Tractability of Stochastic Optimization

Stochastic modeling usually implies much tougher and more limited math. It often leads to Monte Carlo sampling of the possibilities, which can be slow and misled by sampling error. Another issue: the output values themselves have probability distributions, which raises the question of how to compare them and choose a best decision (see the sketch below).
Implication: stochastic modeling reduces tractability.

[Figure: a probability distribution over the objective value.]
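A minimal Monte Carlo sketch of the idea, once more assuming the illustrative toy-model incidences: the dose limits are treated as uncertain, a fixed beam plan is evaluated under sampled limits, and the output is a distribution of outcomes rather than a single number. The normal distributions and their spreads are assumptions chosen purely for illustration.

    # Monte Carlo sketch of parameter uncertainty in the toy model (illustrative;
    # the distributions on the dose limits are assumptions).
    import random

    def plan_is_feasible(b1, b2, b3, limits):
        lim80, lim100, lim60 = limits
        return (b1 + b3 <= lim80) and (b2 + b3 <= lim100) and (b1 + b2 <= lim60)

    random.seed(0)
    trials, feasible_count = 10_000, 0
    for _ in range(trials):
        # sample uncertain dose limits around their nominal values
        limits = (random.gauss(80, 5), random.gauss(100, 5), random.gauss(60, 5))
        if plan_is_feasible(0, 60, 40, limits):   # the nominally optimal plan
            feasible_count += 1
    print("plan stays feasible in", feasible_count / trials, "of sampled scenarios")

Because the nominally optimal plan sits right on two dose limits, it survives only a fraction of the sampled scenarios, which is exactly the kind of fragility stochastic optimization tries to account for.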

Themes

"Optimal" is a mathematically precise concept: a best feasible solution for a single measure of preference.
Choosing optimization methods and models means constantly balancing tractability against usefulness of the results: the model must be somewhat tractable to get any results, but too many simplifying assumptions lead to useless outcomes.
Users need to be aware of the limitations of the various methods:
- Are the methods prone to local optima?
- How critical are any needed penalty weights?
- Do the methods at least guarantee a feasible solution?
- If a solution is not guaranteed to be optimal, is the error bounded?
- Can stochastic effects be neglected?