On the Dual Approach to Recursive Optimization

On the Dual Approach to Recursive Optimization, by Messner, Pavoni and Sleet
Discussion by: Ctirad Slavík, Goethe University Frankfurt
Conference on Private Information, Interdependent Preferences and Robustness: Theory and Applications
Max Planck Institute Bonn, May 2013

Outline
- Introduction and relationship to the literature.
- (Brief) summary of the paper.
- My comments.
- Conclusion.

Introduction
Great paper: I learned a lot.
Contribution of this paper:
- Provide recursive dual formulations for a (large) class of dynamic (incentive) problems.
- Extend/generalize Marcet and Marimon (2011) and Messner, Pavoni and Sleet (RED, 2012).

Discussion
Why do we need recursive formulations?
- Facilitate characterization.
- Enable quantitative analysis.
Why do we need to use the dual problem?
- Makes characterization easier.
- Makes quantitative analysis easier.

Marcet and Marimon (1999-2011)
- Consider a problem with limited commitment.
- Rewrite it as a recursive problem in which the state consists of capital, the current shock, and the (up-to-date) sum of Lagrange multipliers.
- Impose assumptions that avoid technical difficulties.

Messner, Pavoni, Sleet (RED, 2012)
Four formulations: sequential primal, recursive primal, sequential dual, recursive dual.
Equivalence between sequential primal and recursive dual under somewhat restrictive conditions:
- Environment without backward-looking state variables (capital), i.e. not a generalization of Marcet-Marimon.
- Linearity in forward-looking constraints (incentive constraints, limited commitment constraints, etc.).
Provide convergence results for the Bellman operator in the recursive dual problem.

This Paper
Four formulations: sequential primal, recursive primal, sequential dual, recursive dual.
Equivalence between sequential primal and recursive dual under very general conditions:
- An environment with backward-looking state variables.
- General form of forward-looking constraints (including non-separability across states).
Provide contraction results for the Bellman operator in the recursive dual: uniqueness and speed of convergence.

Brief Outline: from sequential primal to recursive dual
Steps:
1. Sequential primal with constraints.
2. Rewrite as a Lagrangian; rewrite in sup-inf form.
3. Consider the sequential dual, i.e. inf-sup.
4. Decompose; rewrite as recursive dual.

Super-simple Example
A 2-period problem with capital and limited commitment:
- The lender has y in periods 1 and 2. He can eat it or lend it to the entrepreneur, who can use it as capital to produce.
- Initial capital is zero; no depreciation; irreversibility.
- The entrepreneur can walk away with the installed capital and current borrowing.
Step 1: The lender's problem:
V = \sup_{c^l_1, c^l_2, c^e_1, c^e_2, k_1, k_2} u^l(c^l_1) + u^l(c^l_2)
subject to
f(k_1) + (y - k_1) - c^l_1 - c^e_1 \geq 0
f(k_2) + (y - k_2 + k_1) - c^l_2 - c^e_2 \geq 0
u^e(c^e_1) + u^e(c^e_2) - v_1(k_1) \geq 0
u^e(c^e_2) - v_2(k_2) \geq 0
Denote a = (c^l_1, c^l_2, c^e_1, c^e_2, k_1, k_2) \in A = \mathbb{R}^6_+.
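To make Step 1 concrete, here is a minimal numerical sketch of the lender's primal problem. All functional forms and parameter values (square-root utilities, production f(k) = k^0.3, linear walk-away values v_1, v_2, and y = 1) are illustrative assumptions of mine, not taken from the slides.

```python
# A minimal sketch of the lender's primal problem in Step 1.
# Functional forms are assumptions: sqrt utilities, f(k) = k**0.3,
# linear walk-away values, y = 1. None of this is from the slides.
import numpy as np
from scipy.optimize import minimize

y = 1.0
f = lambda k: k**0.3                      # production technology (assumed)
ul = lambda c: 2.0 * np.sqrt(c)           # lender's utility (assumed)
ue = lambda c: 2.0 * np.sqrt(c)           # entrepreneur's utility (assumed)
v1 = lambda k: 0.5 * k                    # period-1 walk-away value (assumed)
v2 = lambda k: 0.5 * k                    # period-2 walk-away value (assumed)

# a = (cl1, cl2, ce1, ce2, k1, k2), the choice vector in A = R^6_+
constraints = [
    # period-1 resources: f(k1) + (y - k1) - cl1 - ce1 >= 0
    {"type": "ineq", "fun": lambda a: f(a[4]) + (y - a[4]) - a[0] - a[2]},
    # period-2 resources: f(k2) + (y - k2 + k1) - cl2 - ce2 >= 0
    {"type": "ineq", "fun": lambda a: f(a[5]) + (y - a[5] + a[4]) - a[1] - a[3]},
    # limited commitment, period 1: ue(ce1) + ue(ce2) - v1(k1) >= 0
    {"type": "ineq", "fun": lambda a: ue(a[2]) + ue(a[3]) - v1(a[4])},
    # limited commitment, period 2: ue(ce2) - v2(k2) >= 0
    {"type": "ineq", "fun": lambda a: ue(a[3]) - v2(a[5])},
]

res = minimize(lambda a: -(ul(a[0]) + ul(a[1])), np.full(6, 0.3),
               bounds=[(0.0, None)] * 6, constraints=constraints)
print("V ≈", -res.fun, "at a ≈", res.x.round(3))
```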

Step 2: Rewrite as a Lagrangian. Denote \lambda = (\lambda_1, \lambda_2, \lambda_3, \lambda_4) \in \Lambda = \mathbb{R}^4_+.
L(a, \lambda) = u^l(c^l_1) + u^l(c^l_2) + \lambda_1 [f(k_1) + (y - k_1) - c^l_1 - c^e_1] + \lambda_2 [f(k_2) + (y - k_2 + k_1) - c^l_2 - c^e_2] + \lambda_3 [u^e(c^e_1) + u^e(c^e_2) - v_1(k_1)] + \lambda_4 [u^e(c^e_2) - v_2(k_2)]
In sup-inf notation:
V = \sup_{a \in A} \inf_{\lambda \in \Lambda} L(a, \lambda)
Intuition: infinite penalization if constraints are violated.
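Continuing the sketch (same assumed functional forms as above), the Lagrangian of Step 2 is just a plain function of (a, λ):

```python
# The Lagrangian L(a, λ) of Step 2, reusing f, ul, ue, v1, v2, y
# from the sketch above (all assumed forms).
def L(a, lam):
    cl1, cl2, ce1, ce2, k1, k2 = a
    l1, l2, l3, l4 = lam
    return (ul(cl1) + ul(cl2)
            + l1 * (f(k1) + (y - k1) - cl1 - ce1)
            + l2 * (f(k2) + (y - k2 + k1) - cl2 - ce2)
            + l3 * (ue(ce1) + ue(ce2) - v1(k1))
            + l4 * (ue(ce2) - v2(k2)))
```

This makes the penalization intuition mechanical: for a feasible a every bracket is nonnegative, so the inf over λ ≥ 0 sets all multipliers to zero and returns the original objective; for an infeasible a some bracket is negative and letting that multiplier grow drives L to minus infinity.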

Step 3: Rewrite in inf-sup form:
D = \inf_{\lambda \in \Lambda} \sup_{a \in A} L(a, \lambda)
This is the sequential dual problem.
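A brute-force version of the sequential dual, under the same assumptions: for each fixed λ, solve the inner sup numerically, then minimize the resulting dual function over λ. For some λ the inner sup is unbounded; a careful implementation would detect this, while the sketch simply relies on the outer search staying in the region where the dual function is finite.

```python
# Sequential dual D = inf_{λ∈Λ} sup_{a∈A} L(a, λ): inner sup by
# numerical maximization, outer inf over the multipliers.
def dual_fn(lam):
    inner = minimize(lambda a: -L(a, lam), np.full(6, 0.3),
                     bounds=[(0.0, None)] * 6)
    return -inner.fun  # value of sup_a L(a, λ) for this λ

outer = minimize(dual_fn, np.ones(4), bounds=[(0.0, None)] * 4)
print("D ≈", outer.fun)  # absent a duality gap, D matches V above
```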

Step 4: First, separate period 1 and period 2.
D = \inf_{\lambda \in \Lambda} \Big[ \sup_{c^l_1, c^e_1, k_1} \big\{ u^l(c^l_1) + \lambda_1 [f(k_1) + (y - k_1) - c^l_1 - c^e_1] + \lambda_2 k_1 + \lambda_3 [u^e(c^e_1) - v_1(k_1)] \big\} + \sup_{c^l_2, c^e_2, k_2} \big\{ u^l(c^l_2) + \lambda_2 [f(k_2) + y - k_2 - c^l_2 - c^e_2] + \lambda_3 u^e(c^e_2) + \lambda_4 [u^e(c^e_2) - v_2(k_2)] \big\} \Big]
Period 1 multipliers: \lambda_1, \lambda_3. Period 2 multipliers: \lambda_2, \lambda_4.
Define \phi = \lambda_2. It summarizes how the period 2 multiplier \lambda_2 affects the period 1 problem. Forward looking?
Define \mu = \lambda_3. It summarizes how the period 1 multiplier \lambda_3 affects the period 2 problem. Backward looking?

Now rewrite recursively:
D_1 := \inf_{\lambda_1, \lambda_3, \phi, \mu} \sup_{c^l_1, c^e_1, k_1} u^l(c^l_1) + \lambda_1 [f(k_1) + (y - k_1) - c^l_1 - c^e_1] + \phi k_1 + \lambda_3 [u^e(c^e_1) - v_1(k_1)] + D_2(\mu, \phi) \quad \text{s.t. } \mu = \lambda_3
D_2(\mu, \phi) = \inf_{\lambda_2, \lambda_4} \sup_{c^l_2, c^e_2, k_2} u^l(c^l_2) + \lambda_2 [f(k_2) + y - k_2 - c^l_2 - c^e_2] + \mu u^e(c^e_2) + \lambda_4 [u^e(c^e_2) - v_2(k_2)] \quad \text{s.t. } \phi = \lambda_2
Bellman principle: D = D_1.
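The same decomposition in code, again under the assumed functional forms: D2 takes the "backward" multiplier µ and the "forward" multiplier φ as states, and D1 adds the continuation value D2(µ, φ). The constraints µ = λ3 and φ = λ2 are substituted directly rather than imposed as side conditions.

```python
# Recursive dual of Step 4 (same assumed forms as the earlier sketches).
def D2(mu, phi):
    # period-2 problem with λ2 = φ: inf over λ4 of sup over (cl2, ce2, k2);
    # for some (µ, φ) the sup is unbounded, and the outer search avoids that region
    def sup2(l4):
        def neg(x):
            cl2, ce2, k2 = x
            return -(ul(cl2) + phi * (f(k2) + y - k2 - cl2 - ce2)
                     + mu * ue(ce2) + l4 * (ue(ce2) - v2(k2)))
        return -minimize(neg, np.full(3, 0.3), bounds=[(0.0, None)] * 3).fun
    return minimize(lambda l: sup2(l[0]), [1.0], bounds=[(0.0, None)]).fun

def D1():
    # choose (λ1, λ3, φ); µ = λ3 is passed straight into D2
    def obj(lam):
        l1, l3, phi = lam
        def neg(x):
            cl1, ce1, k1 = x
            return -(ul(cl1) + l1 * (f(k1) + (y - k1) - cl1 - ce1)
                     + phi * k1 + l3 * (ue(ce1) - v1(k1)))
        sup1 = -minimize(neg, np.full(3, 0.3), bounds=[(0.0, None)] * 3).fun
        return sup1 + D2(l3, phi)
    return minimize(obj, np.ones(3), bounds=[(0.0, None)] * 3).fun

print("D1 ≈", D1())  # Bellman principle: D = D1
```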

Further Results
- Do all of this in a much more general, infinite-horizon environment.
- Contraction result for the Bellman operator in the recursive dual problem, i.e. D is unique and convergence obtains.
- Directly relate the recursive dual to the sequential primal.
- Provide a method to check for failures of sufficiency, i.e. for when a solution to the recursive dual cannot be shown to solve the sequential primal.

Comments
What economic problems did not have a (tractable) recursive formulation prior to this paper? Examples in MPS (RED, 2012):
- Limited commitment models: see Marcet and Marimon (2011) for a Lagrangian approach; for the primal form with the promised value as a state, see e.g. the Ljungqvist and Sargent textbook.
- Private information with i.i.d. shocks: Thomas and Worrall (1990).
- Private information with persistent shocks: Fernandes and Phelan (2000), a primal approach. Kapicka (2013) uses the first-order approach (in the primal). Fukushima and Waki (2013).

Comments
What is the benefit of using this method over other methods?
- Theoretically: very general, less restrictive assumptions, easier characterization of the set of feasible states.
- Quantitatively: computation time, robustness? We do not know: solve a sample problem and compare speed and precision.
- Contractive methods are robust, but tend to be slow and imprecise (value function iteration in the standard growth model or in an incomplete-markets model à la Aiyagari). The rate of convergence is some ρ < 1; the growth model converges at rate β.
- Could one improve upon this method using insights from quantitative work in that literature (e.g. the Howard algorithm, or iterating on the policy functions using the Euler equation)? See the sketch below.
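To illustrate the last two points, here is a minimal sketch of value function iteration with a Howard (policy-improvement) step in the standard deterministic growth model. The model, grid, and parameters (log utility, f(k) = k^α, full depreciation, β = 0.95) are my assumptions, chosen only to show that plain VFI contracts at rate β while cheap policy-evaluation sweeps cut the number of outer iterations.

```python
# VFI with a Howard improvement step in the standard growth model
# (assumed: log utility, f(k) = k**alpha, full depreciation).
import numpy as np

beta, alpha = 0.95, 0.3
k = np.linspace(0.05, 0.5, 200)
C = k[:, None]**alpha - k[None, :]             # c(i, j) = f(k_i) - k_j
U = np.where(C > 0, np.log(np.maximum(C, 1e-12)), -1e10)

V = np.zeros(len(k))
for it in range(1000):
    Q = U + beta * V[None, :]
    policy = Q.argmax(axis=1)                  # greedy policy
    V_new = Q.max(axis=1)
    for _ in range(20):                        # Howard: evaluate the fixed policy
        V_new = U[np.arange(len(k)), policy] + beta * V_new[policy]
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
print("converged after", it + 1, "outer iterations")
```

Dropping the inner evaluation loop recovers plain VFI, whose error shrinks by a factor β per iteration; with the Howard sweeps the outer loop converges in far fewer iterations at negligible extra cost per pass.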

Minor Comments
- I would like an application with backward- and forward-looking constraints and shocks, to guide the reader through the theory.
- Are Assumptions 1 and 2 strong? They seem to be important for the contraction results.

Conclusion
- A great theoretical step forward.
- Importance for quantitative analysis? Time will tell, as in the Marcet and Marimon case.