Dynamic Stochastic General Equilibrium Models


1 Dynamic Stochastic General Equilibrium Models
Dr. Andrea Beccarini, Willi Mutschler
Summer 2012
A. Beccarini (CQE), DSGE, Summer 2012

2 Note on Dynamic Programming: The Standard Recursive Method

Consider a representative agent whose objective is to find a control (or decision) sequence $\{u_t\}_{t=0}^{\infty}$ such that

$$E_0 \left[ \sum_{t=0}^{\infty} \beta^t U(x_t, u_t) \right] \tag{1}$$

is maximized, subject to

$$x_{t+1} = F(x_t, u_t, z_t) \tag{2}$$

3 Here $x_t$ is a vector of $m$ state variables at period $t$; $u_t$ is a vector of $n$ control variables; $z_t$ is a vector of $s$ exogenous variables whose dynamics depend on neither $x_t$ nor $u_t$; and $\beta \in (0, 1)$ denotes the discount factor. The uncertainty in the model comes only from the exogenous $z_t$. Assume that $z_t$ follows an AR(1) process:

$$z_{t+1} = a + b z_t + \epsilon_{t+1} \tag{3}$$

where $\epsilon_t$ is independently and identically distributed (i.i.d.). The initial condition $(x_0, z_0)$ in this formulation is assumed to be given.
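As a concrete illustration, the exogenous process (3) can be simulated directly. The parameter values below are illustrative choices, not taken from the lecture; a minimal sketch in Python:

```python
import numpy as np

# Simulate the exogenous AR(1) process (3): z_{t+1} = a + b z_t + eps_{t+1},
# with eps i.i.d. normal. Illustrative parameters; |b| < 1 makes the process
# stationary with unconditional mean a / (1 - b).
a, b, sigma = 0.1, 0.9, 0.02
T = 10_000
rng = np.random.default_rng(0)

z = np.empty(T)
z[0] = a / (1 - b)                     # start at the unconditional mean
for t in range(T - 1):
    z[t + 1] = a + b * z[t] + sigma * rng.standard_normal()

print(z.mean())                        # close to a / (1 - b) = 1.0
```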

4 To solve the dynamic decision problem, one seeks a time-invariant policy function $G$ mapping the state and exogenous variables $(x, z)$ into the control $u$:

$$u_t = G(x_t, z_t) \tag{4}$$

With such a policy function (or control equation), the state sequence $\{x_t\}_{t=0}^{\infty}$ and control sequence $\{u_t\}_{t=0}^{\infty}$ can be generated by iterating the control equation together with the state equation (2), given the initial condition $(x_0, z_0)$ and the exogenous sequence $\{z_t\}_{t=0}^{\infty}$ generated by eq. (3).

5 To find the policy function $G$ by the recursive method, one first defines a value function $V$:

$$V(x_0, z_0) = \max_{\{u_t\}_{t=0}^{\infty}} E_0 \left[ \sum_{t=0}^{\infty} \beta^t U(x_t, u_t) \right] \tag{5}$$

Expression (5) can be transformed to unveil its recursive structure.

6 For this purpose, one first rewrites eq. (5) as follows:

$$V(x_0, z_0) = \max_{\{u_t\}_{t=0}^{\infty}} \left\{ U(x_0, u_0) + \beta E_0 \left[ \sum_{t=0}^{\infty} \beta^t U(x_{t+1}, u_{t+1}) \right] \right\} \tag{6}$$

The second term in eq. (6) is $\beta$ times the value $V$ as defined in eq. (5), with the initial condition $(x_1, z_1)$. Therefore, one can rewrite eq. (5) as

$$V(x_0, z_0) = \max_{u_0} \left\{ U(x_0, u_0) + \beta E_0 \left[ V(x_1, z_1) \right] \right\} \tag{7}$$

10 The formulation in eq. (7) represents a dynamic programming problem, which highlights the recursive structure of the decision problem. In every period $t$, the planner faces the same decision problem: choosing the control variable $u_t$ that maximizes the current return plus the discounted value of the optimal plan from period $t+1$ onwards. Since the problem repeats itself every period, the time subscripts become irrelevant.

11 One can thus write eq. (7) as

$$V(x, z) = \max_{u} \left\{ U(x, u) + \beta E \left[ V(\tilde{x}, \tilde{z}) \right] \right\} \tag{8}$$

where the tilde over $x$ and $z$ denotes the corresponding next-period values, which are subject to eqs. (2) and (3). Eq. (8) is called the Bellman equation, named after Richard Bellman (1957). If one knew the function $V$, one could solve for $u$ via the Bellman equation; in general, however, $V$ is not known.

12 The typical method in this case is to construct a sequence of value functions by iterating the following equation:

$$V_{j+1}(x, z) = \max_{u} \left\{ U(x, u) + \beta E \left[ V_j(\tilde{x}, \tilde{z}) \right] \right\} \tag{9}$$

In terms of an algorithm, the method can be described as follows:

15 Step 1. Guess a differentiable and concave candidate value function $V_j$.
Step 2. Use the Bellman equation to find the optimal $u$ and then compute $V_{j+1}$ according to eq. (9).
Step 3. If $V_{j+1} = V_j$, stop. Otherwise, update $V_j$ and go to Step 2.

16 Under some regularity conditions on the functions $U$ and $F$, the convergence of this algorithm is guaranteed. The difficulty, however, is that in each pass through Step 2 we need to find the optimal $u$ that maximizes the right-hand side of eq. (9). This task makes it difficult to write a closed-form algorithm for iterating the Bellman equation.
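The three steps above can be sketched numerically. The fragment below applies value function iteration to the Ramsey model introduced later in these notes (log utility, full depreciation), with technology held constant for simplicity; the grid bounds, tolerance and parameter values are illustrative choices.

```python
import numpy as np

# Value function iteration for the deterministic Ramsey model:
# V(K) = max_{K'} { ln(A K^alpha - K') + beta V(K') }.
# Parameters and grid are illustrative choices.
alpha, beta, A = 0.3, 0.95, 1.0

k_ss = (alpha * beta * A) ** (1.0 / (1.0 - alpha))   # interior steady state
grid = np.linspace(0.2 * k_ss, 2.0 * k_ss, 500)      # capital grid

# Utility of every (K, K') pair; -inf rules out infeasible C <= 0.
C = A * grid[:, None] ** alpha - grid[None, :]
U = np.where(C > 0, np.log(np.where(C > 0, C, 1.0)), -np.inf)

V = np.zeros(grid.size)                # Step 1: initial guess V_0 = 0
for _ in range(2000):                  # Steps 2-3: iterate eq. (9) until convergence
    V_new = (U + beta * V[None, :]).max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = grid[(U + beta * V[None, :]).argmax(axis=1)]
exact = alpha * beta * A * grid ** alpha   # closed form: K' = alpha beta A K^alpha
print(np.max(np.abs(policy - exact)))      # small; limited by the grid spacing
```

With a 500-point grid the computed policy agrees with the closed-form solution up to the grid spacing, illustrating that the iteration converges even though the maximization in Step 2 is carried out numerically, point by point.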

17 The Euler Equation

Alternatively, one starts from the Bellman equation (8). The first-order condition for maximizing the right-hand side of the equation takes the form:

$$\frac{\partial U(x, u)}{\partial u} + \beta E \left[ \frac{\partial F(x, u, z)}{\partial u} \frac{\partial V(\tilde{x}, \tilde{z})}{\partial \tilde{x}} \right] = 0 \tag{10}$$

The objective here is to find $\partial V / \partial x$. Assume $V$ is differentiable; then from eq. (8), and exploiting eq. (10), it satisfies

$$\frac{\partial V(x, z)}{\partial x} = \frac{\partial U(x, G(x, z))}{\partial x} + \beta E \left[ \frac{\partial F(x, G(x, z), z)}{\partial x} \frac{\partial V(\tilde{x}, \tilde{z})}{\partial \tilde{x}} \right] \tag{11}$$

This equation is often called the Benveniste-Scheinkman formula.

18 Assume $\partial F / \partial x = 0$. The above formula becomes

$$\frac{\partial V(x, z)}{\partial x} = \frac{\partial U(x, G(x, z))}{\partial x} \tag{12}$$

Substituting this formula into eq. (10) gives rise to the Euler equation:

$$\frac{\partial U(x, u)}{\partial u} + \beta E \left[ \frac{\partial F(x, u, z)}{\partial u} \frac{\partial U(\tilde{x}, \tilde{u})}{\partial \tilde{x}} \right] = 0$$

where $\tilde{u} = G(\tilde{x}, \tilde{z})$. In economic analysis one often encounters models in which, after some transformation, $x$ does not appear in the transition law, so that $\partial F / \partial x = 0$ is satisfied. However, there are still models in which such a transformation is not feasible.

19 Deriving the First-Order Conditions from the Lagrangian

For the dynamic optimization problem represented by eqs. (1) and (2), we can define the Lagrangian $L$:

$$L = E_0 \sum_{t=0}^{\infty} \left\{ \beta^t U(x_t, u_t) - \beta^{t+1} \lambda_{t+1}' \left[ x_{t+1} - F(x_t, u_t, z_t) \right] \right\} \tag{13}$$

where $\lambda_t$, the vector of Lagrange multipliers, is $m \times 1$.

20 Setting the partial derivatives of $L$ with respect to $\lambda_t$, $x_t$ and $u_t$ to zero yields eq. (2) as well as

$$\frac{\partial U(x_t, u_t)}{\partial x_t} + \beta E_t \left[ \frac{\partial F(x_t, u_t, z_t)}{\partial x_t} \lambda_{t+1} \right] = \lambda_t$$

$$\frac{\partial U(x_t, u_t)}{\partial u_t} + \beta E_t \left[ \frac{\partial F(x_t, u_t, z_t)}{\partial u_t} \lambda_{t+1} \right] = 0 \tag{14}$$

In comparison with the Euler equation, an unobservable variable $\lambda_t$ now appears in the system. On the other hand, using eqs. (13) and (14), one does not have to transform the model into a setting in which $\partial F / \partial x = 0$. This is an important advantage over the Euler equation.

21 The Log-linear Approximation Method

Solving nonlinear dynamic optimization models by log-linear approximation has been proposed, in particular, by King et al. (1988) and Campbell (1994) in the context of Real Business Cycle models. In principle, this approximation method can be applied to the first-order conditions either in the form of the Euler equation or as derived from the Lagrangian.

Formally, let $X_t$ be a variable and $\bar{X}$ the corresponding steady state. Then $x_t \equiv \ln X_t - \ln \bar{X}$ is the log-deviation of $X_t$. In particular, $100 x_t$ is the percentage by which $X_t$ deviates from $\bar{X}$.

27 The general idea of this method is to replace all the necessary equations of the model by approximations that are linear in the log-deviations. Given the approximate log-linear system, one then uses the method of undetermined coefficients to solve for the decision rule, which is also in log-linear form. The general procedure involves the following steps:

Step 1. Find the necessary equations characterizing the equilibrium law of motion of the system. These should include the state equation (2), the exogenous equation (3) and the first-order conditions, derived either as the Euler equation or from the Lagrangian, eqs. (13) and (14).

Step 2. Derive the steady state of the model. This requires first parameterizing the model and then evaluating it in its certainty-equivalence form.

Step 3. Log-linearize the necessary equations characterizing the equilibrium law of motion of the system.

29 The following building blocks for such a log-linearization can be used:

$$X_t = \bar{X} e^{x_t} \tag{15}$$

$$e^{x_t + a y_t} \approx 1 + x_t + a y_t \tag{16}$$

$$x_t y_t \approx 0 \tag{17}$$

Step 4. Solve the log-linearized system for the decision rule (which is also in log-linear form) with the method of undetermined coefficients.
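The accuracy of these building blocks is easy to check numerically. The snippet below evaluates (15)-(17) at an arbitrary steady-state value and small log-deviations; all numbers are illustrative:

```python
import math

X_bar = 2.0                  # an arbitrary steady-state value (illustrative)
x, y, a = 0.01, 0.02, 0.5    # small log-deviations and a coefficient

# (15): X_t = X_bar * exp(x_t) holds exactly, by definition of x_t
X_t = X_bar * math.exp(x)
assert math.isclose(math.log(X_t) - math.log(X_bar), x)

# (16): exp(x_t + a y_t) is approximately 1 + x_t + a y_t to first order
assert abs(math.exp(x + a * y) - (1 + x + a * y)) < 1e-3

# (17): products of log-deviations are second-order small
assert abs(x * y) < 1e-3

print("building blocks hold to first order")
```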

30 The Ramsey Problem: The Model

Ramsey (1928) posed a problem of optimal resource allocation which is now often used as a prototype model of dynamic optimization. Let $C_t$ denote consumption, $Y_t$ output and $K_t$ the capital stock. Assume that output is produced from the capital stock and is either consumed or invested, that is, added to the capital stock. Formally,

$$Y_t = A_t K_t^{\alpha} \tag{18}$$

$$Y_t = C_t + K_{t+1} - (1 - \delta) K_t \tag{19}$$

where $\delta$ denotes the depreciation rate of capital.

31 Here $\alpha \in (0, 1)$ and $A_t$ is the technology, which may follow an AR(1) process:

$$A_{t+1} = a_0 + a_1 A_t + \epsilon_{t+1} \tag{20}$$

with $\epsilon_t$ i.i.d. Setting the depreciation rate of capital equal to one, eqs. (18) and (19) imply that the transition law of the capital stock can be written as

$$K_{t+1} = A_t K_t^{\alpha} - C_t \tag{21}$$

Full depreciation is a simplifying assumption under which the exact solution is computable.

32 The representative agent is assumed to find the control sequence $\{C_t\}_{t=0}^{\infty}$ such that

$$\max E_0 \left[ \sum_{t=0}^{\infty} \beta^t \ln C_t \right] \tag{22}$$

given the initial condition $(K_0, A_0)$.

33 The Ramsey Problem: The Euler Equation

The first task is to transform the model into a setting in which the state variable $K_t$ does not appear in $F(\cdot)$. This can be done by taking $K_{t+1}$ (instead of $C_t$) as the model's decision variable. To achieve notational consistency in the time subscript, one may denote the decision variable by $Z_t$. The model can therefore be rewritten as

$$\max E_0 \left[ \sum_{t=0}^{\infty} \beta^t \ln \left( A_t K_t^{\alpha} - K_{t+1} \right) \right] \tag{23}$$

subject to $K_{t+1} = Z_t$.

34 Note that eq. (21) has been used here to express $C_t$ in the utility function, and that in this formulation the state variable in period $t$ is still $K_t$. Therefore $\partial F / \partial x = 0$ and $\partial F / \partial u = 1$. The Bellman equation in this case can be written as

$$V(K_t, A_t) = \max_{K_{t+1}} \left\{ \ln(A_t K_t^{\alpha} - K_{t+1}) + \beta E_t \left[ V(K_{t+1}, A_{t+1}) \right] \right\} \tag{24}$$

The necessary condition for maximizing the right-hand side of the Bellman equation (24) is given by

$$-\frac{1}{A_t K_t^{\alpha} - K_{t+1}} + \beta E_t \left[ \frac{\partial V(K_{t+1}, A_{t+1})}{\partial K_{t+1}} \right] = 0 \tag{25}$$

35 Meanwhile, applying the Benveniste-Scheinkman formula,

$$\frac{\partial V(K_t, A_t)}{\partial K_t} = \frac{\alpha A_t K_t^{\alpha - 1}}{A_t K_t^{\alpha} - K_{t+1}} \tag{26}$$

Substituting eq. (26) into (25) allows us to obtain the Euler equation:

$$-\frac{1}{A_t K_t^{\alpha} - K_{t+1}} + \beta E_t \left[ \frac{\alpha A_{t+1} K_{t+1}^{\alpha - 1}}{A_{t+1} K_{t+1}^{\alpha} - K_{t+2}} \right] = 0$$

which can further be written as

$$-\frac{1}{C_t} + \beta E_t \left[ \frac{\alpha A_{t+1} K_{t+1}^{\alpha - 1}}{C_{t+1}} \right] = 0 \tag{27}$$

The Euler equation (27), along with (21) and (20), determines the sequences $\{K_{t+1}\}_{t=0}^{\infty}$, $\{A_{t+1}\}_{t=0}^{\infty}$ and $\{C_t\}_{t=0}^{\infty}$, given the initial conditions $K_0$ and $A_0$.
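One can verify directly that the exact solution reported below in eqs. (30)-(31) satisfies the Euler equation (27). In the deterministic case (constant technology) the expectation drops out; the parameter values are illustrative:

```python
alpha, beta, A = 0.3, 0.95, 1.2   # illustrative parameter values
K0 = 0.5                          # an arbitrary current capital stock

# Exact solution, eqs. (30)-(31): K_{t+1} = alpha*beta*A*K_t^alpha,
# C_t = (1 - alpha*beta)*A*K_t^alpha
K1 = alpha * beta * A * K0 ** alpha
C0 = (1 - alpha * beta) * A * K0 ** alpha
C1 = (1 - alpha * beta) * A * K1 ** alpha

# Euler equation (27) with the expectation dropped (A is constant):
# -1/C_t + beta * alpha * A * K_{t+1}^(alpha - 1) / C_{t+1} = 0
residual = -1 / C0 + beta * alpha * A * K1 ** (alpha - 1) / C1
print(residual)   # zero up to floating-point error
```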

36 The First-Order Conditions Derived from the Lagrangian

Next, derive the first-order conditions from the Lagrangian. Define

$$L = E_0 \sum_{t=0}^{\infty} \left\{ \beta^t \ln C_t - \beta^{t+1} \lambda_{t+1} \left[ K_{t+1} - A_t K_t^{\alpha} + C_t \right] \right\}$$

Setting the derivatives of $L$ with respect to $\lambda_t$, $C_t$ and $K_t$ to zero, one obtains eq. (21) as well as

$$\frac{1}{C_t} - \beta E_t \lambda_{t+1} = 0 \tag{28}$$

$$\beta E_t \left[ \lambda_{t+1} \right] \alpha A_t K_t^{\alpha - 1} = \lambda_t \tag{29}$$

These are the first-order conditions derived from the Lagrangian. Combining (28) and (29) dated $t+1$ gives $\lambda_{t+1} = \alpha A_{t+1} K_{t+1}^{\alpha - 1} / C_{t+1}$; substituting this into (28) yields the Euler equation (27).

37 The Ramsey Problem: The Exact Solution and the Steady States

The exact solution of this model, which can be derived by the standard methods described above, is

$$K_{t+1} = \alpha \beta A_t K_t^{\alpha} \tag{30}$$

From (21), this further implies

$$C_t = (1 - \alpha \beta) A_t K_t^{\alpha} \tag{31}$$

Given the solution paths for $C_t$ and $K_{t+1}$, one is then able to derive the steady state. One steady state is on the boundary, namely $\bar{K} = 0$ and $\bar{C} = 0$.

38 To obtain the more meaningful interior steady state, take logarithms of both sides of (30) and evaluate the equation in its certainty-equivalence form:

$$\log K_{t+1} = \log(\alpha \beta A_t) + \alpha \log K_t \tag{32}$$

At the steady state, $K_{t+1} = K_t = \bar{K}$. Solving (32) for $\log \bar{K}$, we obtain $\log \bar{K} = \frac{\log(\alpha \beta \bar{A})}{1 - \alpha}$, where $\bar{A} = a_0 / (1 - a_1)$ is the steady state of (20). Therefore,

$$\bar{K} = (\alpha \beta \bar{A})^{1/(1-\alpha)} \tag{33}$$

Given $\bar{K}$, $\bar{C}$ follows from (21):

$$\bar{C} = \bar{A} \bar{K}^{\alpha} - \bar{K} \tag{34}$$
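A quick numerical check of the steady-state formulas, using the AR(1) technology process (20) to pin down $\bar{A}$; all parameter values are illustrative:

```python
alpha, beta = 0.3, 0.95
a0, a1 = 0.05, 0.9                    # illustrative AR(1) parameters for (20)
A_bar = a0 / (1 - a1)                 # steady state of technology: A = a0 + a1*A

# Interior steady state, eqs. (33)-(34)
K_bar = (alpha * beta * A_bar) ** (1 / (1 - alpha))
C_bar = A_bar * K_bar ** alpha - K_bar

# K_bar is a fixed point of the exact transition law (30) evaluated at A_bar...
K_next = alpha * beta * A_bar * K_bar ** alpha
assert abs(K_next - K_bar) < 1e-12

# ...and iterating (30) from any K_0 > 0 converges to it
K = 0.01
for _ in range(200):
    K = alpha * beta * A_bar * K ** alpha
print(K_bar, C_bar, abs(K - K_bar))   # the last term is essentially zero
```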


Euler s Method for Approximating Solution Curves

Euler s Method for Approximating Solution Curves Euler s Method for Approximating Solution Curves As you may have begun to suspect at this point, time constraints will allow us to learn only a few of the many known methods for solving differential equations.

More information

Exponential and Logarithmic Functions. College Algebra

Exponential and Logarithmic Functions. College Algebra Exponential and Logarithmic Functions College Algebra Exponential Functions Suppose you inherit $10,000. You decide to invest in in an account paying 3% interest compounded continuously. How can you calculate

More information

Dynamic Regulation of Fisheries: the Case of the Bowhead Whale

Dynamic Regulation of Fisheries: the Case of the Bowhead Whale Marine Resource Economirt. Volume 8, pp. 145-154 O738-U6O/9J $300 +.00 Printed in the USA. All rights reserved. Copyright O 1993 Marine Resources Foundation Dynamic Regulation of Fisheries: the Case of

More information

AC : USING A SCRIPTING LANGUAGE FOR DYNAMIC PROGRAMMING

AC : USING A SCRIPTING LANGUAGE FOR DYNAMIC PROGRAMMING AC 2008-2623: USING A SCRIPTING LANGUAGE FOR DYNAMIC PROGRAMMING Louis Plebani, Lehigh University American Society for Engineering Education, 2008 Page 13.1325.1 Using a Scripting Language for Dynamic

More information

Introduction to Modeling. Lecture Overview

Introduction to Modeling. Lecture Overview Lecture Overview What is a Model? Uses of Modeling The Modeling Process Pose the Question Define the Abstractions Create the Model Analyze the Data Model Representations * Queuing Models * Petri Nets *

More information

A Set of Tools for Computing Recursive Equilibria

A Set of Tools for Computing Recursive Equilibria A Set of Tools for Computing Recursive Equilibria Simon Mongey November 17, 2015 Contents 1 Introduction 1 2 Baseline model 2 3 Solving the Bellman equation using collocation 2 3.1 Storage................................................

More information

Newton and Quasi-Newton Methods

Newton and Quasi-Newton Methods Lab 17 Newton and Quasi-Newton Methods Lab Objective: Newton s method is generally useful because of its fast convergence properties. However, Newton s method requires the explicit calculation of the second

More information

Finding the Maximum or Minimum of a Quadratic Function. f(x) = x 2 + 4x + 2.

Finding the Maximum or Minimum of a Quadratic Function. f(x) = x 2 + 4x + 2. Section 5.6 Optimization 529 5.6 Optimization In this section we will explore the science of optimization. Suppose that you are trying to find a pair of numbers with a fixed sum so that the product of

More information

Convex Optimization Lecture 2

Convex Optimization Lecture 2 Convex Optimization Lecture 2 Today: Convex Analysis Center-of-mass Algorithm 1 Convex Analysis Convex Sets Definition: A set C R n is convex if for all x, y C and all 0 λ 1, λx + (1 λ)y C Operations that

More information

Targeting Nominal GDP or Prices: Expectation Dynamics and the Interest Rate Lower Bound

Targeting Nominal GDP or Prices: Expectation Dynamics and the Interest Rate Lower Bound Targeting Nominal GDP or Prices: Expectation Dynamics and the Interest Rate Lower Bound Seppo Honkapohja, Bank of Finland Kaushik Mitra, University of Saint Andrews *Views expressed do not necessarily

More information

5 Day 5: Maxima and minima for n variables.

5 Day 5: Maxima and minima for n variables. UNIVERSITAT POMPEU FABRA INTERNATIONAL BUSINESS ECONOMICS MATHEMATICS III. Pelegrí Viader. 2012-201 Updated May 14, 201 5 Day 5: Maxima and minima for n variables. The same kind of first-order and second-order

More information

Convex Optimization and Machine Learning

Convex Optimization and Machine Learning Convex Optimization and Machine Learning Mengliu Zhao Machine Learning Reading Group School of Computing Science Simon Fraser University March 12, 2014 Mengliu Zhao SFU-MLRG March 12, 2014 1 / 25 Introduction

More information

Affine function. suppose f : R n R m is affine (f(x) =Ax + b with A R m n, b R m ) the image of a convex set under f is convex

Affine function. suppose f : R n R m is affine (f(x) =Ax + b with A R m n, b R m ) the image of a convex set under f is convex Affine function suppose f : R n R m is affine (f(x) =Ax + b with A R m n, b R m ) the image of a convex set under f is convex S R n convex = f(s) ={f(x) x S} convex the inverse image f 1 (C) of a convex

More information

Discussion Paper Series

Discussion Paper Series BIRMINGHAM BUSINESS SCHOOL Birmingham Business School Discussion Paper Series Solving Models with Jump Discontinuities in Policy Functions Christoph Görtz Afrasiab Mirza 2016-07 *** This discussion paper

More information

3 Model-Based Adaptive Critic

3 Model-Based Adaptive Critic 3 Model-Based Adaptive Critic Designs SILVIA FERRARI Duke University ROBERT F. STENGEL Princeton University Editor s Summary: This chapter provides an overview of model-based adaptive critic designs, including

More information

9/24/ Hash functions

9/24/ Hash functions 11.3 Hash functions A good hash function satis es (approximately) the assumption of SUH: each key is equally likely to hash to any of the slots, independently of the other keys We typically have no way

More information

4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1

4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 4 LINEAR PROGRAMMING (LP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 Mathematical programming (optimization) problem: min f (x) s.t. x X R n set of feasible solutions with linear objective function

More information

This chapter explains two techniques which are frequently used throughout

This chapter explains two techniques which are frequently used throughout Chapter 2 Basic Techniques This chapter explains two techniques which are frequently used throughout this thesis. First, we will introduce the concept of particle filters. A particle filter is a recursive

More information

Directed Graphical Models (Bayes Nets) (9/4/13)

Directed Graphical Models (Bayes Nets) (9/4/13) STA561: Probabilistic machine learning Directed Graphical Models (Bayes Nets) (9/4/13) Lecturer: Barbara Engelhardt Scribes: Richard (Fangjian) Guo, Yan Chen, Siyang Wang, Huayang Cui 1 Introduction For

More information

Lecture 17: Continuous Functions

Lecture 17: Continuous Functions Lecture 17: Continuous Functions 1 Continuous Functions Let (X, T X ) and (Y, T Y ) be topological spaces. Definition 1.1 (Continuous Function). A function f : X Y is said to be continuous if the inverse

More information

Constrained Optimization with Calculus. Background Three Big Problems Setup and Vocabulary

Constrained Optimization with Calculus. Background Three Big Problems Setup and Vocabulary Constrained Optimization with Calculus Background Three Big Problems Setup and Vocabulary Background Information In unit 3, you learned about linear programming, in which all constraints and the objective

More information

1. Show that the rectangle of maximum area that has a given perimeter p is a square.

1. Show that the rectangle of maximum area that has a given perimeter p is a square. Constrained Optimization - Examples - 1 Unit #23 : Goals: Lagrange Multipliers To study constrained optimization; that is, the maximizing or minimizing of a function subject to a constraint (or side condition).

More information

ORIE 6300 Mathematical Programming I November 13, Lecture 23. max b T y. x 0 s 0. s.t. A T y + s = c

ORIE 6300 Mathematical Programming I November 13, Lecture 23. max b T y. x 0 s 0. s.t. A T y + s = c ORIE 63 Mathematical Programming I November 13, 214 Lecturer: David P. Williamson Lecture 23 Scribe: Mukadder Sevi Baltaoglu 1 Interior Point Methods Consider the standard primal and dual linear programs:

More information

C3 Numerical methods

C3 Numerical methods Verulam School C3 Numerical methods 138 min 108 marks 1. (a) The diagram shows the curve y =. The region R, shaded in the diagram, is bounded by the curve and by the lines x = 1, x = 5 and y = 0. The region

More information

Numerical Dynamic Programming. Jesus Fernandez-Villaverde University of Pennsylvania

Numerical Dynamic Programming. Jesus Fernandez-Villaverde University of Pennsylvania Numerical Dynamic Programming Jesus Fernandez-Villaverde University of Pennsylvania 1 Introduction In the last set of lecture notes, we reviewed some theoretical background on numerical programming. Now,

More information

Convexity. 1 X i is convex. = b is a hyperplane in R n, and is denoted H(p, b) i.e.,

Convexity. 1 X i is convex. = b is a hyperplane in R n, and is denoted H(p, b) i.e., Convexity We ll assume throughout, without always saying so, that we re in the finite-dimensional Euclidean vector space R n, although sometimes, for statements that hold in any vector space, we ll say

More information

Convexity and Optimization

Convexity and Optimization Convexity and Optimization Richard Lusby DTU Management Engineering Class Exercises From Last Time 2 DTU Management Engineering 42111: Static and Dynamic Optimization (3) 18/09/2017 Today s Material Extrema

More information

Tangents of Parametric Curves

Tangents of Parametric Curves Jim Lambers MAT 169 Fall Semester 2009-10 Lecture 32 Notes These notes correspond to Section 92 in the text Tangents of Parametric Curves When a curve is described by an equation of the form y = f(x),

More information

Efficient implementation of Constrained Min-Max Model Predictive Control with Bounded Uncertainties

Efficient implementation of Constrained Min-Max Model Predictive Control with Bounded Uncertainties Efficient implementation of Constrained Min-Max Model Predictive Control with Bounded Uncertainties D.R. Ramírez 1, T. Álamo and E.F. Camacho2 Departamento de Ingeniería de Sistemas y Automática, Universidad

More information

Today. Lecture 4: Last time. The EM algorithm. We examine clustering in a little more detail; we went over it a somewhat quickly last time

Today. Lecture 4: Last time. The EM algorithm. We examine clustering in a little more detail; we went over it a somewhat quickly last time Today Lecture 4: We examine clustering in a little more detail; we went over it a somewhat quickly last time The CAD data will return and give us an opportunity to work with curves (!) We then examine

More information

Lecture 2 - Introduction to Polytopes

Lecture 2 - Introduction to Polytopes Lecture 2 - Introduction to Polytopes Optimization and Approximation - ENS M1 Nicolas Bousquet 1 Reminder of Linear Algebra definitions Let x 1,..., x m be points in R n and λ 1,..., λ m be real numbers.

More information

Network Flows. 7. Multicommodity Flows Problems. Fall 2010 Instructor: Dr. Masoud Yaghini

Network Flows. 7. Multicommodity Flows Problems. Fall 2010 Instructor: Dr. Masoud Yaghini In the name of God Network Flows 7. Multicommodity Flows Problems 7.2 Lagrangian Relaxation Approach Fall 2010 Instructor: Dr. Masoud Yaghini The multicommodity flow problem formulation: We associate nonnegative

More information

THE RECURSIVE LOGIT MODEL TUTORIAL

THE RECURSIVE LOGIT MODEL TUTORIAL THE RECURSIVE LOGIT MODEL TUTORIAL CIRRELT SEMINAR August 2017 MAELLE ZIMMERMANN, TIEN MAI, EMMA FREJINGER CIRRELT and Université de Montréal, Canada Ecole Polytechnique de Montréal, Canada TUTORIAL GOALS

More information

Part I. Problems in this section are mostly short answer and multiple choice. Little partial credit will be given. 5 points each.

Part I. Problems in this section are mostly short answer and multiple choice. Little partial credit will be given. 5 points each. Math 106/108 Final Exam Page 1 Part I. Problems in this section are mostly short answer and multiple choice. Little partial credit will be given. 5 points each. 1. Factor completely. Do not solve. a) 2x

More information

COMPUTER VISION > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE

COMPUTER VISION > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE COMPUTER VISION 2017-2018 > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE OUTLINE Optical flow Lucas-Kanade Horn-Schunck Applications of optical flow Optical flow tracking Histograms of oriented flow Assignment

More information

III Data Structures. Dynamic sets

III Data Structures. Dynamic sets III Data Structures Elementary Data Structures Hash Tables Binary Search Trees Red-Black Trees Dynamic sets Sets are fundamental to computer science Algorithms may require several different types of operations

More information

PARALLEL OPTIMIZATION

PARALLEL OPTIMIZATION PARALLEL OPTIMIZATION Theory, Algorithms, and Applications YAIR CENSOR Department of Mathematics and Computer Science University of Haifa STAVROS A. ZENIOS Department of Public and Business Administration

More information

Lecture 4 Duality and Decomposition Techniques

Lecture 4 Duality and Decomposition Techniques Lecture 4 Duality and Decomposition Techniques Jie Lu (jielu@kth.se) Richard Combes Alexandre Proutiere Automatic Control, KTH September 19, 2013 Consider the primal problem Lagrange Duality Lagrangian

More information

Constrained Optimization COS 323

Constrained Optimization COS 323 Constrained Optimization COS 323 Last time Introduction to optimization objective function, variables, [constraints] 1-dimensional methods Golden section, discussion of error Newton s method Multi-dimensional

More information

Optimization Methods. Final Examination. 1. There are 5 problems each w i t h 20 p o i n ts for a maximum of 100 points.

Optimization Methods. Final Examination. 1. There are 5 problems each w i t h 20 p o i n ts for a maximum of 100 points. 5.93 Optimization Methods Final Examination Instructions:. There are 5 problems each w i t h 2 p o i n ts for a maximum of points. 2. You are allowed to use class notes, your homeworks, solutions to homework

More information

Algorithms for Integer Programming

Algorithms for Integer Programming Algorithms for Integer Programming Laura Galli November 9, 2016 Unlike linear programming problems, integer programming problems are very difficult to solve. In fact, no efficient general algorithm is

More information

Part 4. Decomposition Algorithms Dantzig-Wolf Decomposition Algorithm

Part 4. Decomposition Algorithms Dantzig-Wolf Decomposition Algorithm In the name of God Part 4. 4.1. Dantzig-Wolf Decomposition Algorithm Spring 2010 Instructor: Dr. Masoud Yaghini Introduction Introduction Real world linear programs having thousands of rows and columns.

More information

Constrained and Unconstrained Optimization

Constrained and Unconstrained Optimization Constrained and Unconstrained Optimization Carlos Hurtado Department of Economics University of Illinois at Urbana-Champaign hrtdmrt2@illinois.edu Oct 10th, 2017 C. Hurtado (UIUC - Economics) Numerical

More information

Linear Programming Duality and Algorithms

Linear Programming Duality and Algorithms COMPSCI 330: Design and Analysis of Algorithms 4/5/2016 and 4/7/2016 Linear Programming Duality and Algorithms Lecturer: Debmalya Panigrahi Scribe: Tianqi Song 1 Overview In this lecture, we will cover

More information

Space-Progressive Value Iteration: An Anytime Algorithm for a Class of POMDPs

Space-Progressive Value Iteration: An Anytime Algorithm for a Class of POMDPs Space-Progressive Value Iteration: An Anytime Algorithm for a Class of POMDPs Nevin L. Zhang and Weihong Zhang lzhang,wzhang @cs.ust.hk Department of Computer Science Hong Kong University of Science &

More information

f xx + f yy = F (x, y)

f xx + f yy = F (x, y) Application of the 2D finite element method to Laplace (Poisson) equation; f xx + f yy = F (x, y) M. R. Hadizadeh Computer Club, Department of Physics and Astronomy, Ohio University 4 Nov. 2013 Domain

More information

Ellipsoid Algorithm :Algorithms in the Real World. Ellipsoid Algorithm. Reduction from general case

Ellipsoid Algorithm :Algorithms in the Real World. Ellipsoid Algorithm. Reduction from general case Ellipsoid Algorithm 15-853:Algorithms in the Real World Linear and Integer Programming II Ellipsoid algorithm Interior point methods First polynomial-time algorithm for linear programming (Khachian 79)

More information

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material

More information