
1 Solving Stochastic Optimization Problems on Computational Grids. Steve Wright, Argonne National Laboratory / University of Chicago / University of Wisconsin-Madison. Dundee, June 26.

Outline: Stochastic Programming (SP); Formulation and Basic Algorithms for SP; Condor and metaNEOS; Asynchronous Trust-Region Algorithm; Computational Results: Algorithm Performance; Sampling Methodology; Computational Results: Quality of Solutions. Joint work with Jeff Linderoth (Axioma Inc) and Alex Shapiro (Georgia Tech).

Stochastic Programming: optimization of a model with uncertainty. Example: network planning, adding capacity on a telecommunications network for private-line services (Sen et al., 1994). Often formulated mathematically as

    min_x f(x) := E_\xi[ g(x; \xi) ] = \int g(x; \xi) \, p(\xi) \, d\xi    (p is a probability density function),

subject to constraints on x in R^n.

[Figure: nodes and links, node pair A-B, and a route between A and B.]

This arises in planning-under-uncertainty applications, where each \xi represents a possible scenario (a possible way in which the model could evolve). The scenario space \Omega can contain finitely or infinitely many scenarios. g(x; \xi) could be the value function of some second-level optimization problem parametrized by x (recourse).

In the network example: add capacity to some links, to attempt to meet (uncertain) demand for traffic between nodes. Sample demand profile for the third node pair: 0 (prob. 0.855), 5.39 (prob. 0.095), 75.1 (prob. 0.05). (A tiny numerical illustration of this recourse cost appears below.)
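To make the recourse cost concrete: the demand profile quoted above has three scenarios for a single node pair, so the expected unmet demand for that pair, for any given amount of capacity routed to it, is a three-term sum. The snippet below is a minimal illustration of that calculation; the capacity value of 10.0 is made up, not from the talk.

```python
# Expected unmet demand for one node pair under the quoted three-scenario profile.
demands = [0.0, 5.39, 75.1]      # possible demands for the third node pair
probs   = [0.855, 0.095, 0.05]   # their probabilities (sum to 1)

capacity = 10.0                  # hypothetical capacity available to this pair

expected_unmet = sum(p * max(d - capacity, 0.0) for d, p in zip(demands, probs))
print(f"expected unmet demand: {expected_unmet:.3f}")   # 0.05 * (75.1 - 10.0) = 3.255
```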

2 Data:
- network topology: n = 89 links.
- point-to-point pairs i = 1, 2, ..., m; the demands d_i for each pair i are random and independent, with 3 to 7 possible scenarios each. In total, on the order of 10^70 scenarios!

Decision variables: x_j, j = 1, 2, ..., n: the amount of capacity to add on link j. Total new capacity bounded by B. Objective: minimize the expected amount of unmet demand, summed over the m point-to-point pairs.

We can't hope to solve the problem by accounting for all the possible scenarios exhaustively; it's much too large. The practical approach is to use sampling to select a subset of scenarios randomly. The sample average approximation (SAA) is still large, but manageable.

2-stage stochastic LP with recourse:

    min_x Q(x) = c^T x + E_P[ Q(x; \omega) ]    subject to  Ax = b,  x >= 0,

where P is a probability measure on the space (\Omega, F), and

    Q(x; \omega) = min_y q(\omega)^T y    subject to  W y = h(\omega) - T(\omega) x,  y >= 0.

Here x are the first-stage variables and y the second-stage variables.

Sampled approximation: sample points \omega^j, j = 1, 2, ..., N, from P, and solve

    min_x Q_N(x) = c^T x + (1/N) \sum_{j=1}^{N} Q(x; \omega^j)    subject to  Ax = b,  x >= 0.

Each Q(x; \omega^j) is convex and piecewise linear in x.

"Bundle" methods: build up a lower-bounding, piecewise-linear approximation to Q_N(x), based on function values Q_N(x^\ell) and subgradients g^\ell at the iterates x^\ell. Compute subgradients of Q_N by finding dual solutions \pi_j of the second-stage LPs

    Q(x; \omega^j):  min_{y_j} q(\omega^j)^T y_j    subject to  W y_j = h(\omega^j) - T(\omega^j) x,  y_j >= 0,

for j = 1, 2, ..., N (concurrently!) and summing:

    g = c - (1/N) \sum_{j=1}^{N} T(\omega^j)^T \pi_j.

[Figure: Q_N(x) and its subgradients.]

The model function M_k(x) after k iterates is

    M_k(x) = max_{\ell = 0, 1, ..., k} [ Q_N(x^\ell) + (g^\ell)^T (x - x^\ell) ].

Choose the next iterate as x^{k+1} = argmin M_k(x) subject to Ax = b, x >= 0, which can be formulated as

    min_{x, \theta} \theta    subject to  Ax = b,  x >= 0,  \theta >= Q_N(x^\ell) + (g^\ell)^T (x - x^\ell),  \ell = 0, 1, ..., k.

(Each such constraint is called a cut. A toy sketch of this cutting-plane iteration appears below.)
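The iteration just described is easy to prototype. The sketch below is not the authors' code: it replaces the sampled recourse term with a tiny hand-made piecewise-linear function so that the whole thing runs standalone, and uses scipy.optimize.linprog for the master LP. In a real solver, eval_Q would dispatch the N second-stage LPs (to workers) and assemble the subgradient from their dual solutions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stand-in for the SAA objective:  Q_N(x) = c^T x + max_s (a_s^T x + b_s),
# minimized over  x1 + x2 = 1, x >= 0.  All data here is invented for illustration.
c = np.array([1.0, 2.0])
A_pieces = np.array([[3.0, -1.0], [-2.0, 4.0], [0.5, 0.5]])
b_pieces = np.array([0.0, 1.0, 2.0])
A_eq, b_eq = np.array([[1.0, 1.0]]), np.array([1.0])

def eval_Q(x):
    """Return Q_N(x) and one subgradient g (from the active linear piece)."""
    vals = A_pieces @ x + b_pieces
    s = int(np.argmax(vals))
    return c @ x + vals[s], c + A_pieces[s]

x = np.array([0.5, 0.5])                   # initial feasible point
cuts, best = [], float("inf")
for k in range(25):
    Qx, g = eval_Q(x)
    best = min(best, Qx)
    cuts.append((Qx, g, x.copy()))         # cut:  theta >= Qx + g^T (x - x^k)
    # Master LP over z = (x1, x2, theta):  min theta  s.t. cuts, Ax = b, x >= 0.
    A_ub = np.array([np.append(gc, -1.0) for (_, gc, _) in cuts])
    b_ub = np.array([gc @ xc - Qc for (Qc, gc, xc) in cuts])
    res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.hstack([A_eq, [[0.0]]]), b_eq=b_eq,
                  bounds=[(0, None), (0, None), (None, None)], method="highs")
    x, theta = res.x[:2], res.x[2]         # theta = minimum of the model M_k
    if best - theta < 1e-8:                # model matches the best true value: stop
        break

print("approximate minimizer:", x, "objective:", eval_Q(x)[0])
```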

3 Example: after the first two iterations (\ell = 0, 1):

[Figure: Q_N(x) with the iterates x^0, x^1 and the model M_1(x).]

x^2 is the minimizer of M_1; add a new subgradient to obtain M_2; take its minimizer to obtain x^3:

[Figure: Q_N(x) with the iterates x^2, x^3 and the model M_2(x).]

Enhancements: trust region (allows more steady progress, exploits a good starting point); algorithm that allows deletion of old cuts; group the second-stage problems Q(x; \omega^j) into T "chunks" N_t, t = 1, 2, ..., T, with {1, 2, ..., N} = \cup_{t=1,...,T} N_t, and assign each chunk to a worker processor; multiple cuts at each x (each chunk can return its own subgradients); an asynchronous variant is preferred for our target parallel platform.

Trust region (TR): choose the next candidate as

    x^{k+} = argmin M_k(x)    subject to  Ax = b,  x >= 0,  ||x - x^k||_\infty <= \Delta_k,

where \Delta_k is the trust-region radius. It is trivial to modify the LP subproblem: just add the bounds -\Delta_k e <= x - x^k <= \Delta_k e. If the candidate point x^{k+} is "significantly better" (achieves some fraction of the decrease predicted by the model), then set x^{k+1} <- x^{k+}; possibly delete cuts and increase the trust region. Otherwise, set x^{k+1} <- x^k and add subgradient information from x^{k+} to improve the model; possibly delete uninteresting cuts and decrease the trust region. (A sketch of this acceptance logic appears below.)

TR properties: denoting the solution set by S, we have dist(x^k, S) -> 0. Cuts can be deleted liberally between major iterations. The algorithm may still be too synchronous: it requires complete evaluation of Q_N(x) at a candidate iterate x before proceeding.
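A minimal sketch of the acceptance test and radius update just described (the specific constants 1e-4, 0.5, and 2 are my own illustrative choices, not values from the talk):

```python
def tr_update(Q_incumbent, Q_candidate, model_value, delta, xi=1e-4, delta_max=1e3):
    """One trust-region decision.
    Q_incumbent = Q_N(x^k),  Q_candidate = Q_N(x^{k+}),  model_value = M_k(x^{k+}).
    Returns (accept, new_delta)."""
    predicted = Q_incumbent - model_value     # decrease promised by the model
    actual = Q_incumbent - Q_candidate        # decrease actually achieved
    if predicted > 0 and actual >= xi * predicted:
        # "significantly better": accept; enlarge the radius after a very good step
        grow = 2.0 if actual >= 0.5 * predicted else 1.0
        return True, min(grow * delta, delta_max)
    # otherwise keep the incumbent, add the new cuts, and shrink the radius
    return False, 0.5 * delta
```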

4 Related work: Marsten et al., "box step" (1975), builds up an exact model of the problem over the TR. Lemarechal, "bundle" (1975 et seq.). Kiwiel (1983 et seq.). Ruszczynski, "regularized decomposition" (1986), specifically for stochastic programming. The last three give quadratic programming subproblems, not LPs; they are more like a Levenberg approach than a trust-region approach.

Condor! High-throughput computing on pools of distributively owned computers. Developed at Wisconsin: Livny. Condor pools consist of user workstations, nodes from multiprocessor systems, and clusters. Condor handles scheduling and the matching of user requirements to machine characteristics, plus checkpointing and migration. Flocking and glide-in mechanisms allow jobs to execute across multiple pools.

A challenging environment... The Condor environment is powerful and inexpensive, but challenging for algorithm designers and implementers. Dynamic/opportunistic: the size and composition of the worker pool change unpredictably during the computation. Heterogeneous: old workstations and fast new Linux machines, different versions of Solaris, software licenses valid on only some machines. Latency unpredictable, generally slow: workers can be next to each other in a rack, or separated by 6000 miles and the Internet. Problems that are large and compute-intensive, and algorithms that are asynchronous, work best on this platform.

MW (Master-Worker): many interesting optimization algorithms can be shoehorned into the master-worker paradigm. MW is a runtime support library for implementing master-worker computations on the Condor system (Yoder, Kulkarni, Linderoth, Goux). MW abstracts away the issues of resource management and communication: Condor handles resource management, and communication is either via shared files or Condor-PVM. (A rough caricature of the master-worker evaluation pattern appears below.)
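MW itself is a C++ framework running on Condor, so the sketch below is only a caricature of the master-worker pattern it supports, written with Python's multiprocessing instead of Condor-PVM. The chunk evaluation is replaced by a trivial closed-form stand-in purely so the example runs end to end; none of the names here come from the MW API.

```python
from multiprocessing import Pool

def solve_chunk(args):
    """Worker task: process one chunk of scenarios at the point x.
    The 'second-stage LP' is replaced by a closed-form stand-in,
    unmet demand max(omega - sum(x), 0), so the sketch is self-contained."""
    x, chunk = args
    partial_obj = sum(max(w - sum(x), 0.0) for w in chunk)
    # stand-in subgradient contribution: -1 per component whenever demand is unmet
    unmet = sum(1.0 for w in chunk if w > sum(x))
    return partial_obj, [-unmet] * len(x)

def evaluate_Q(x, scenarios, n_chunks=8, n_workers=4):
    """Master side: split scenarios into chunks, farm them out, average the results."""
    chunks = [scenarios[i::n_chunks] for i in range(n_chunks)]
    with Pool(n_workers) as pool:
        results = pool.map(solve_chunk, [(x, ch) for ch in chunks])
    N = len(scenarios)
    obj = sum(r[0] for r in results) / N
    subgrad = [sum(r[1][j] for r in results) / N for j in range(len(x))]
    return obj, subgrad

if __name__ == "__main__":
    print(evaluate_Q([0.3, 0.4], scenarios=[0.0, 0.5, 1.2, 2.0] * 25))
```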

5 TR may not be asynchronous enough! The TR approach still synchronizes on the function evaluation at each candidate point x^{k+}. If there are T chunks of second-stage scenarios, we can't use more than T processors. For many problems of interest, we cannot make T very large (10 to 100) without making the work per chunk too small and creating too much contention at the master. We may wait a long time for the last chunk to be evaluated, if its host is suspended or disappears.

An asynchronous trust-region (ATR) algorithm increases parallelism and throughput. Maintain an incumbent x^I: the best point found so far (smallest value of Q_N). Maintain a basket B of 3 to 20 other points x, possible new incumbents, for which the second-stage LPs are currently being solved. When space becomes available in B, generate a new candidate point by solving a TR subproblem around the current incumbent: ||x - x^I||_\infty <= \Delta. (x^I becomes the parent incumbent of the new point.)

(continued) When evaluation of a point x in B is completed, accept it as the new incumbent if Q_N(x) < Q_N(x^I) and Q_N(x) gives a significant decrease over its parent incumbent. Populate B initially by solving TR subproblems around early incumbents, using partial subgradient information. (Synchronicity parameter \sigma.) Strategies for cut deletion and adjustment of the trust region \Delta are adapted from the strategies of the synchronous TR algorithm. (A simplified sketch of the basket logic appears below.)

Convergence of ATR: because of the strategy for selecting incumbents, and for cut management and adjustment of \Delta, we can re-use much of the theory for the serial case. From a given incumbent, we can trace back a chain through successive parents to the initial point. A sufficient decrease condition is satisfied with respect to each link in this chain. By applying synchronous TR theory to this chain, and assuming that all tasks terminate finitely, we have dist(x^I, S) -> 0.
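The essence of ATR is the basket bookkeeping. Below is a heavily simplified single-machine sketch of that logic, with my own stand-ins for the objective and for the TR subproblem (a random perturbation of the incumbent) and a crude significance test; it is meant only to show how evaluations complete asynchronously while new candidates are generated around whatever the incumbent is at that moment.

```python
import concurrent.futures as cf
import random

def Q(x):                        # stand-in for the expensive sampled objective Q_N
    return (x - 3.0) ** 2

def propose(incumbent, delta):   # stand-in for solving the TR subproblem around x^I
    return incumbent + random.uniform(-delta, delta)

def atr(x0, basket_size=4, delta=1.0, xi=1e-4, max_evals=60):
    incumbent, Q_inc = x0, Q(x0)
    with cf.ThreadPoolExecutor(max_workers=basket_size) as pool:
        # the basket maps each running evaluation to (candidate, parent incumbent value)
        basket, evals = {}, 0
        while evals < basket_size:                    # fill the basket initially
            cand = propose(incumbent, delta)
            basket[pool.submit(Q, cand)] = (cand, Q_inc)
            evals += 1
        while basket:
            done, _ = cf.wait(basket, return_when=cf.FIRST_COMPLETED)
            for fut in done:
                cand, Q_parent = basket.pop(fut)
                Q_cand = fut.result()
                # accept if better than the incumbent AND a significant decrease
                # relative to the parent incumbent it was generated from
                if Q_cand < Q_inc and Q_parent - Q_cand >= xi * abs(Q_parent):
                    incumbent, Q_inc = cand, Q_cand
                if evals < max_evals:                 # refill around current incumbent
                    new_c = propose(incumbent, delta)
                    basket[pool.submit(Q, new_c)] = (new_c, Q_inc)
                    evals += 1
    return incumbent, Q_inc

print(atr(x0=0.0))
```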

6 SSN: computational results. First stage: 89 variables, 1 constraint; second stage: 706 variables, 175 constraints, 2284 nonzeros. With N = 10^4 scenarios the sampled LP has roughly 7.06M second-stage variables; with N = 10^5, roughly 70.6M. We study the effect of asynchronicity and parallelism on large sampled instances, reporting results for N = 10^4 and N = 10^5 scenarios with synchronicity parameter \sigma = 0.7.

[Tables: for each run (ALS, TR, and ATR variants) the columns report iterations, basket size |B|, number of chunks, cuts per iteration, processors, average parallel efficiency, and wall clock time in minutes; the numerical entries did not survive transcription.]

7 storm: computational results. Cargo flight scheduling problem (Mulvey and Ruszczynski). First stage: 121 variables; second stage: 1259 variables. For an N = 250,000-scenario sampled approximation, the LP has roughly 132 million rows and about 315 million (314,750,121) columns. Started from a solution of a 3000-scenario approximation, whose quality is very good: TR takes a single step and terminates; ATR doesn't take any steps, it just verifies the quality of the starting point. (For a chunk of 2000 scenarios, the task size is about 150 seconds.)

[Table: run, iter, |B|, chunks, cuts/iter, procs, average parallel efficiency, wall clock (min); numerical entries lost in transcription.]

storm with 10^7 scenarios: the LP has approximately 5 x 10^9 rows and 1.26 x 10^10 columns. Used machines at Wisconsin, NCSA (Illinois), New Mexico, Argonne, Italy, Columbia. 800 machines were requested, 556 actually used during the run (an average of 433 at any one time). Second-stage linear programs were solved at a rate of 3472 per second during the run; the average task took 774 seconds; total computation time was 9014 hours (more than a year of CPU time).

[Table/plot: number of workers over time; run, iter, |B|, chunks, cuts/iter, procs, average parallel efficiency, wall clock (hrs); entries lost in transcription.]

8 Solution quality: can we get useful estimates of the optimal objective value of the true problem from the sampled problem? Can we get confidence intervals? How do the solutions of the sampled approximation relate to those of the real problem? Using ATR, along with relevant theory (some of it recent), we have performed computational and statistical studies of these issues for some difficult problems from the literature.

Notation reminder: the true problem is

    min_{Ax=b, x>=0} Q(x) = c^T x + \sum_{i=1}^{K} p_i Q(x; \omega^i)

(where K ~ 10^70, say), while the sample average approximation (SAA) is formed by sampling points {\omega^1, \omega^2, ..., \omega^N} from the distribution P and solving

    min_{Ax=b, x>=0} Q_N(x) = c^T x + (1/N) \sum_{j=1}^{N} Q(x; \omega^j).

Denote

    Z* = min_{Ax=b, x>=0} Q(x),    Z_N = min_{Ax=b, x>=0} Q_N(x).

Can use Monte Carlo sampling (sampling with replacement). Can also use variance reduction techniques to reduce Var(Z_N). We implemented Latin hypercube sampling (sampling without replacement).

Lower bound for Z*: it is well known that E[Z_N] <= Z*. This is true for any sampling scheme that yields an unbiased estimator; in particular we can use certain variance reduction techniques (e.g. Latin hypercube) to select the sample {\omega^1, \omega^2, ..., \omega^N}.

Confidence interval for the lower bound: generate M batches, each a sampled approximation of size N of the form {\omega^{(i)}_1, \omega^{(i)}_2, ..., \omega^{(i)}_N}, i = 1, 2, ..., M, and solve the M SAAs to obtain optimal values Z_N^{(1)}, Z_N^{(2)}, ..., Z_N^{(M)}. Then estimate E[Z_N] by

    L_{N,M} = (1/M) \sum_{i=1}^{M} Z_N^{(i)}.

In the limit, \sqrt{M} [ L_{N,M} - E Z_N ] converges in distribution to N(0, \sigma_L^2), where \sigma_L^2 = Var(Z_N). We can approximate \sigma_L^2 by the sample variance estimator

    s_L(M)^2 = 1/(M-1) \sum_{i=1}^{M} [ Z_N^{(i)} - L_{N,M} ]^2.

Then, defining z_\alpha such that P{ N(0,1) <= z_\alpha } = 1 - \alpha, our estimate of the width of the (1 - 2\alpha) confidence interval is z_\alpha s_L(M) / \sqrt{M}. (For \alpha = 0.025 we have z_\alpha ~ 1.96.)

9 Application areas of the test problems:

    name     application          scenarios   stage 1   stage 2
    LandS    hydro power          -           -         -
    gbd      -                    -           -         -
    storm    cargo flights        -           -         -
    20term   vehicle assignment   -           -         -
    ssn      network design       -           -         -

(The scenario counts, stage dimensions, and gbd's application did not survive transcription.)

Upper bound for Z*: given any feasible point x̂, we have Q(x̂) >= Q(x*) = Z*. Choose an x̂ that appears to be nearly optimal, e.g. the minimizer of some Q_N. Choose T i.i.d. samples, each of size N' (using Monte Carlo or Latin hypercube):

    { \omega^{(i)}_1, \omega^{(i)}_2, ..., \omega^{(i)}_{N'} },    i = 1, 2, ..., T.

Defining

    Qhat^{(i)}(x̂) = c^T x̂ + (1/N') \sum_{j=1}^{N'} Q(x̂; \omega^{(i)}_j),

we get an unbiased estimator:

    U_{N',T} = (1/T) \sum_{i=1}^{T} Qhat^{(i)}(x̂).

Confidence interval for the upper bound: since the batches are i.i.d., \sqrt{T} [ U_{N',T} - Q(x̂) ] converges in distribution to N(0, \sigma_U^2), where \sigma_U^2 = Var(Qhat(x̂)). Approximate \sigma_U^2 by the sample variance estimator

    s_U(T)^2 = 1/(T-1) \sum_{i=1}^{T} [ Qhat^{(i)}(x̂) - U_{N',T} ]^2,

which gives confidence interval width z_\alpha s_U(T) / \sqrt{T}.

Test problems: we performed experiments with five problems from the literature. Solved SAAs for sample sizes N ranging from 50 to 5000, with between 9 and 12 SAAs (the batch count M) for each value of N. In the upper-bound evaluation, for each optimizing x̂ from the SAAs, we used T = 50 and N' = 20,000. We report the value of x̂ for which the estimate U_{N',T} is lowest, together with its confidence interval. In selecting the samples of size N (for the SAA) and N' (for evaluation), we used both Monte Carlo and Latin hypercube sampling, as reported in the tables. (A small sketch of the confidence-interval computation from batch values appears below.)
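Once the batch values are in hand, both confidence intervals reduce to the same mean-plus-half-width computation. The sketch below uses made-up batch values, not numbers from the talk.

```python
import statistics as st

Z_ALPHA = 1.96   # two-sided 95% interval (alpha = 0.025 per tail)

def batch_ci(values):
    """Mean and half-width z_alpha * s / sqrt(#batches) for a list of batch values."""
    return st.mean(values), Z_ALPHA * st.stdev(values) / len(values) ** 0.5

# Illustrative, made-up batch results:
saa_optima   = [9.88, 9.91, 9.79, 9.95, 9.86, 9.90, 9.84, 9.93, 9.89]  # Z_N^(i), i = 1..M
eval_batches = [9.92, 9.96, 9.88, 9.91, 9.94, 9.90, 9.93, 9.89]        # Qhat^(i)(xhat), i = 1..T

L, dL = batch_ci(saa_optima)     # estimates E[Z_N], a lower bound on Z*
U, dU = batch_ci(eval_batches)   # estimates Q(xhat), an upper bound on Z*
print(f"lower bound {L:.3f} +/- {dL:.3f};  upper bound {U:.3f} +/- {dU:.3f}")
```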

10 Results on bounds: SSN. Results of Mak et al. (1999): [table of lower- and upper-bound batch/sample estimates with % confidence values; only the entries 0.21 and 0.11 are legible in the transcription]. Using different techniques, Mak et al. generate an approximate solution x̂ using N = 2000 and obtain the upper bound 10.06 ± 0.12 (95% confidence interval); with 95% likelihood, the optimal Z* is within 0.77 of this value.

Solution estimates for SSN (95% confidence intervals), sample sizes N from 50 to 5000:

[Tables: with Monte Carlo sampling the lower-bound estimates rise from about 4.1 ± 1.2 at N = 50 toward about 10.0 ± 0.2 at N = 5000, while the upper-bound estimates fall from about 12.9 to 10.01 ± 0.09. With Latin hypercube sampling the estimates are much tighter, with lower bounds near 9.8 to 10.1 throughout and upper bounds falling from about 11.4 to 9.90. Many individual entries are garbled in this transcription.]

[Figures: SSN Monte Carlo; SSN Latin hypercube.]

11 Solution estimates for gbd (95% confidence intervals), Monte Carlo and Latin hypercube sampling:

[Tables: with Monte Carlo sampling the interval half-widths shrink steadily as N grows; with Latin hypercube sampling, beyond the smallest sample size the lower and upper estimates stabilize at a common value (entries ending in .62) with very small half-widths. Most individual entries are garbled in this transcription.]

[Figures: gbd Monte Carlo; gbd Latin hypercube.]

Solution estimates for storm (95% confidence intervals), Monte Carlo and Latin hypercube sampling:

[Tables: entries garbled in this transcription; the legible half-widths show the Latin hypercube intervals to be markedly narrower than the Monte Carlo intervals at the same N.]

12 [Figures: Storm Monte Carlo and Storm Latin Hypercube convergence plots; only axis tick labels around 1.55e+06 survive of the plots.]

Solution estimates for 20term (95% confidence intervals), Monte Carlo and Latin hypercube sampling:

[Tables: entries garbled in this transcription. The legible half-widths show the Monte Carlo lower-bound intervals shrinking from roughly ±944 to ±85 as N grows, and the Latin hypercube upper-bound half-widths staying near ±5.]

[Figure: 20term Monte Carlo convergence plot.]

13 [Figures: 20term Latin Hypercube, LandS Monte Carlo, and LandS Latin Hypercube convergence plots.]

Conditioning and exact solutions: recent results (Shapiro and Homem-de-Mello, 2000) indicate that a discrete SAA can identify the exact solution of a stochastic linear program over a discrete probability space. If the solution is unique, the chance of identifying it exactly approaches 1 exponentially rapidly in N:

    P( x̂_N = x* ) >= 1 - e^{-\beta N}    for some \beta > 0.

We investigated the solutions obtained for the finest SAAs (N = 5000) for each problem instance, using Latin hypercube sampling, and plotted distance matrices for six SAA solutions for each problem. (A sketch of that computation follows.)
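The distance matrices mentioned above are straightforward to compute once the SAA solutions are available; here is a short sketch using made-up solution vectors in place of the six N = 5000 solutions.

```python
import numpy as np

# Stand-ins for six SAA solutions xhat_N obtained from independent samples.
solutions = np.random.default_rng(0).normal(size=(6, 10))

# Pairwise Euclidean distances between the six solutions.
diff = solutions[:, None, :] - solutions[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
print(np.round(dist, 2))   # near-zero off-diagonal entries would indicate a sharp minimizer
```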

14 [Figures: distance matrices for the six SAA solutions of each problem; the numerical entries did not survive transcription.]

Distances of gbd solutions: a sharp, well-defined minimizer. Distances of SSN solutions: the solutions are far apart, though their objective values appear to be similar, indicating a shallow minimum. Distances of 20term solutions. Distances of storm solutions: also a well-defined minimizer.

15 (The following slides were not used in the talk but are retained in the file for reference.)

Latin hypercube sampling: aim to reduce the variance in Z_N. Example: scenario space (\omega_1, \omega_2, \omega_3), where each \omega_i is distributed according to

    P(\omega = A) = 0.5,    P(\omega = B) = 0.25,    P(\omega = C) = 0.25.

[Figure: the three outcomes A, B, C and the three random variables \omega_1, \omega_2, \omega_3.]

Sample size N = 4. Divide [0, 1] into 4 intervals and allow exactly one sample in each interval for each random variable, e.g.:

    \omega_1:  B C A A
    \omega_2:  A A B C
    \omega_3:  A C A B

(A small sketch of this construction appears below.)

We use Latin hypercube sampling with larger sample sizes. Lower bound: 39 batches of size N; obtained L_{N,M} = 9.9163 with standard error 0.0273, giving the 95% confidence interval [9.8610, 9.9716]. Upper bound: for each of the SAA solutions x̂ obtained from the lower-bound calculation, we took 21 batches of size N' = 20,000 and used these to estimate U_{N'}(x̂) for each x̂, together with its standard error. For the "best" x̂ we obtained 9.9001 with standard error 0.0190, giving the 95% confidence interval [9.8614, 9.9397]. This suggests strongly that the optimal value Z* is close to 9.91.
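A small sketch of the Latin hypercube construction from the worked example above: for each of the three random variables we draw one point in each of the four equal-probability subintervals of [0, 1], shuffle the subintervals independently per variable, and map each draw through the inverse CDF of the A/B/C distribution. (The particular letters produced depend on the random seed, so they will not reproduce the exact table above.)

```python
import random

OUTCOMES, CUM = ["A", "B", "C"], [0.5, 0.75, 1.0]   # P(A)=.5, P(B)=.25, P(C)=.25

def inv_cdf(u):
    """Map a uniform draw u in [0,1) to an outcome via the cumulative distribution."""
    return next(o for o, c in zip(OUTCOMES, CUM) if u <= c)

def latin_hypercube(n_vars=3, n_samples=4, rng=random.Random(1)):
    """For every variable, exactly one uniform draw falls in each of the
    n_samples equal subintervals of [0, 1] (Latin hypercube / stratified sampling)."""
    rows = []
    for _ in range(n_vars):
        strata = list(range(n_samples))
        rng.shuffle(strata)                    # independent permutation per variable
        draws = [(s + rng.random()) / n_samples for s in strata]
        rows.append([inv_cdf(u) for u in draws])
    return rows    # rows[i][j] = outcome of variable omega_{i+1} in scenario j

for i, row in enumerate(latin_hypercube(), start=1):
    print(f"omega_{i}:", " ".join(row))
```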
