Inference Based on SVARs Identified with Sign and Zero Restrictions: Theory and Applications

FEDERAL RESERVE BANK of ATLANTA WORKING PAPER SERIES

Inference Based on SVARs Identified with Sign and Zero Restrictions: Theory and Applications

Jonas E. Arias, Juan F. Rubio-Ramírez, and Daniel F. Waggoner

Working Paper b, Revised October 2017

Abstract: In this paper, we develop algorithms to independently draw from a family of conjugate posterior distributions over the structural parameterization when sign and zero restrictions are used to identify SVARs. We call this family of conjugate posterior distributions normal-generalized-normal. Our algorithms draw from a conjugate uniform-normal-inverse-Wishart posterior over the orthogonal reduced-form parameterization and transform the draws into the structural parameterization; this transformation induces a normal-generalized-normal posterior distribution over the structural parameterization. The uniform-normal-inverse-Wishart posterior over the orthogonal reduced-form parameterization has been prominent after the work of Uhlig (2005). We use Beaudry, Nam, and Wang's (2011) work on the relevance of optimism shocks to show the dangers of using alternative approaches, like the penalty function approach, to implement sign and zero restrictions to identify SVARs. In particular, we analytically show that the penalty function approach adds restrictions to the ones described in the identification scheme.

JEL classification: C11, C32, E50

Key words: identification, sign restrictions, simulation

The authors thank Paul Beaudry, Andrew Mountford, Deokwoo Nam, and Jian Wang for sharing supplementary material with us and for helpful comments. They also thank Grátula Bedátula for her support and help. Without her, this paper would have been impossible. This paper has circulated under the title "Algorithm for Inference with Sign and Zero Restrictions." Juan F. Rubio-Ramírez also thanks the National Science Foundation, the Institute for Economic Analysis (IAE), the "Programa de Excelencia en Educación e Investigación" of the Bank of Spain, and the Spanish ministry of science and technology (Ref. ECO c03-01) for support. The views expressed here are the authors' and not necessarily those of the Federal Reserve Bank of Atlanta, the Federal Reserve Bank of Philadelphia, or the Federal Reserve System. Any remaining errors are the authors' responsibility. Please address questions regarding content to Jonas E. Arias, Research Department, Federal Reserve Bank of Philadelphia, Ten Independence Mall, Philadelphia, PA, jonas.arias@phil.frb.org; Juan F. Rubio-Ramírez (corresponding author), Economics Department, Emory University, Atlanta, GA 30322, jrubior@emory.edu; or Daniel F. Waggoner, Research Department, Federal Reserve Bank of Atlanta, 1000 Peachtree Street NE, Atlanta, GA, daniel.f.waggoner@atl.frb.org.

Federal Reserve Bank of Atlanta working papers, including revised versions, are available on the Atlanta Fed's website. Click "Publications" and then "Working Papers." To receive notifications about new papers, use frbatlanta.org/forms/subscribe.

1 Introduction

Structural vector autoregressions (SVARs) identified with sign and zero restrictions have become prominent. The fact that identification generally comes from fewer restrictions than in traditional identification schemes and that any conclusions are robust across the set of SVARs consistent with the restrictions has made the approach attractive to researchers. Most papers using this approach work in the Bayesian paradigm.[1]

In this paper we develop algorithms to independently draw from a family of conjugate posterior distributions over the structural parameterization conditional on the sign and zero restrictions. We call this family of conjugate posterior distributions normal-generalized-normal and we show that it is commonly used in the literature, mainly after the work of Sims and Zha (1998). We focus on two different parameterizations of SVARs. In addition to the typical structural parameterization, SVARs can also be written as the product of the reduced-form parameters and the set of orthogonal matrices, which we call the orthogonal reduced-form parameterization. Our algorithms will draw from a conjugate posterior distribution over the orthogonal reduced-form parameterization and then transform the draws into the structural parameterization. We follow the literature in our choice of the family of conjugate posterior distributions over the reduced-form parameters and use the normal-inverse-Wishart density.[2] This choice is common because it is a conjugate family and it is extremely easy to independently draw from it. Our choice of conjugate posterior over the set of orthogonal matrices conditional on the reduced-form parameters is uniform. This uniform-normal-inverse-Wishart density over the orthogonal reduced-form parameterization is a recurrent choice after the work of Uhlig (2005). We then develop a change of variable theory that allows us to characterize the induced family of posterior densities over the structural parameterization. This theory shows that a uniform-normal-inverse-Wishart posterior density over the orthogonal reduced-form parameterization induces a normal-generalized-normal posterior distribution over the structural parameterization. The family of normal-generalized-normal densities over the structural parameterization is also conjugate and it is often used in the literature. In any case, our algorithms can be easily modified to consider a more general family of posterior distributions, both conjugate and non-conjugate.

[1] Exceptions are Moon and Schorfheide (2012), Moon, Schorfheide and Granziera (2013), and Gafarov, Meier and Montiel Olea (2016a,b). Moon and Schorfheide (2012) analyze the differences between Bayesian probability bands and frequentist confidence sets in partially identified models. Moon, Schorfheide and Granziera (2013) and Gafarov, Meier and Montiel Olea (2016a,b) develop methods of constructing error bands for impulse response functions of sign-restricted SVARs that are valid from a frequentist perspective.
[2] Alternatively, one could use a normal-Wishart density.

Using our change of variable theory, we first show that current algorithms for SVARs identified only by sign restrictions, as described by Rubio-Ramírez, Waggoner and Zha (2010), are in fact making independent draws from the normal-generalized-normal distribution over the structural parameterization conditional on the sign restrictions. These algorithms independently draw from the uniform-normal-inverse-Wishart distribution over the orthogonal reduced-form parameterization, only accepting draws such that the sign restrictions hold, and then transforming the accepted draws into the structural parameterization.

Next, we adapt these algorithms to consider zero restrictions. While the set of all structural parameters satisfying the sign restrictions will be open in the set of all structural parameters, the set of all structural parameters satisfying the sign and zero restrictions is of measure zero in the set of all structural parameters. This invalidates the direct use of current algorithms when zero restrictions are considered. But the set of all structural parameters satisfying both the sign and zero restrictions is of positive measure in the set of all structural parameters satisfying the zero restrictions. Hence, we describe an algorithm that makes independent draws from the set of all structural parameters satisfying the zero restrictions. The key to this algorithm is that the class of zero restrictions on the structural parameters maps to linear restrictions on the orthogonal matrices, conditional on the reduced-form parameters. This algorithm independently draws from a normal-inverse-Wishart over the reduced-form parameters and from the set of orthogonal matrices such that the zero restrictions hold. Because the zero restrictions define a lower dimensional smooth manifold in the set of all structural parameters, our change of variable theory allows us to do two things. First, we show that this algorithm does not induce a posterior distribution over the structural parameterization from the family of normal-generalized-normal distributions conditional on the sign and zero restrictions. Second, we calculate the induced density and write an importance sampler that independently draws from normal-generalized-normal distributions over the structural parameterization conditional on the sign and zero restrictions.

When using sign and zero restrictions, a commonly used algorithm is Mountford and Uhlig's (2009) penalty function approach, PFA henceforth. We show that the PFA adds restrictions; hence, identification does not solely come from the sign and zero restrictions considered in the identification scheme. We show the consequences of using the PFA by first replicating the results in Beaudry, Nam and Wang (2011), and by comparing them with the results that a researcher would obtain if our importance sampler were to be used instead. The aim of Beaudry, Nam and Wang (2011) is to provide new evidence on the relevance of optimism shocks as an important driver of macroeconomic fluctuations by means of an SVAR identified by imposing a sign restriction on the impact response of stock prices to optimism shocks and a zero restriction on the contemporaneous response of TFP to these shocks.

Based on the results obtained with the PFA, Beaudry, Nam and Wang (2011) conclude that optimism shocks are clearly important for explaining standard business cycle type phenomena because they increase consumption and hours. Once our importance sampler is used, the identified optimism shocks do not increase consumption and hours and, hence, there is little evidence supporting the assertion that optimism shocks are important for business cycles. The results reported in Beaudry, Nam and Wang (2011) are significantly affected by the additional restrictions imposed by the PFA.

We are not the first ones to criticize the PFA. There is an existing literature that already does exactly that. For example, Caldara and Kamps (2012) and Binning (2013) share some of our concerns, while adding others, about the PFA. In related and very original work, Giacomini and Kitagawa (2015) are also concerned with the choice of prior densities in SVARs identified using sign and zero restrictions. They also work on the orthogonal reduced-form parameterization and propose a method for conducting posterior inference on IRFs that is robust to the choice of prior densities. We see our paper as sympathetic to their concern about the choice of prior densities. Finally, we have to highlight Baumeister and Hamilton (2015). This paper directly draws in the structural parameterization. This is a very interesting and novel approach since the rest of the literature (including us) works in the orthogonal reduced-form parameterization. While working in the structural parameterization has clear advantages, mainly being able to define prior densities directly on economically interpretable structural parameters, this approach uses a Metropolis-Hastings algorithm to make the draws. Hence this approach is inefficient compared with ours and harder to implement in larger models.

We wish to state that the aim of this paper is neither to dispute nor to challenge SVARs identified with sign and zero restrictions. In fact, our methodology preserves the virtues of the pure sign restriction approach developed in the work of Faust (1998), Canova and Nicoló (2002), Uhlig (2005), and Rubio-Ramírez, Waggoner and Zha (2010).

2 Pitfalls of the Penalty Function Approach

Beaudry, Nam and Wang (2011) analyze the relevance of optimism shocks as a driver of macroeconomic fluctuations using SVARs identified with sign and zero restrictions. More details about their work will be given in Section 8. At this point it suffices to say that in their most basic SVAR, Beaudry, Nam and Wang (2011) use data on total factor productivity (TFP), stock prices, consumption, the real federal funds rate, and hours worked.

Their identification scheme defines optimism shocks as positively affecting stock prices but not affecting TFP contemporaneously, and they use the PFA to implement it. Beaudry, Nam and Wang (2011) also claim that identification solely comes from these two restrictions.

Figure 1: IRFs to a one standard deviation optimism shock. The solid curves represent the point-wise posterior medians and the shaded areas represent the 68 percent point-wise probability bands. The figure is based on 10,000 independent draws obtained using the PFA.

Figure 1 replicates the main result in Beaudry, Nam and Wang (2011). As shown by the narrow 68 percent point-wise probability bands of their IRFs, Beaudry, Nam and Wang (2011) obtain the result that consumption and hours worked respond positively and strongly to optimism shocks. If the IRFs shown in Figure 1 were the IRFs to optimism shocks solely identified using the two restrictions described above, they would clearly endorse the view of those who think that optimism shocks are relevant for business cycle fluctuations. But this is not the case. When the PFA is used, identification does not solely come from the identifying restrictions; as we show below, the PFA introduces restrictions in addition to the ones specified in the identification scheme. In Section 8 we will show that if our importance sampler is used instead, the results do not support as strongly the view that optimism shocks are relevant for business cycle fluctuations. Hence, we can conclude that the results reported in Figure 1 are mostly driven by the additional restrictions imposed by the PFA.

3 Our Methodology

This section first describes the SVAR. It then discusses the identification problem and the class of sign and zero restrictions considered in this paper. Next, it introduces the structural and the orthogonal reduced-form parameterizations. Our algorithms will draw from the orthogonal reduced-form parameterization and then transform the draws to the structural parameterization. Hence, we must be able to transform the prior and posterior distributions from one parameterization to another. The necessary theory to accomplish this is also described in this section. The usual change of variable theorem is not sufficient because we are not transforming between open subsets of Euclidean spaces of the same dimension, but instead are transforming between smooth manifolds of the same dimension. We will briefly outline the volume measure on smooth manifolds and state the appropriate generalizations of the change of variable theorem. Finally, we explicitly specify the conjugate prior distributions that will be used. While the algorithms developed here can be applied to other non-conjugate prior distributions, they are most efficient in the conjugate case.

3.1 The Model

Consider the SVAR with the general form, as in Rubio-Ramírez, Waggoner and Zha (2010),

y_t' A_0 = \sum_{l=1}^{p} y_{t-l}' A_l + c + ε_t'   for 1 ≤ t ≤ T,   (1)

where y_t is an n × 1 vector of endogenous variables, ε_t is an n × 1 vector of exogenous structural shocks, A_l is an n × n matrix of parameters for 0 ≤ l ≤ p with A_0 invertible, c is a 1 × n vector of parameters, p is the lag length, and T is the sample size. The vector ε_t, conditional on past information and the initial conditions y_0, ..., y_{1-p}, is Gaussian with mean zero and covariance matrix I_n, the n × n identity matrix. The model described in Equation (1) can be compactly written as

y_t' A_0 = x_t' A_+ + ε_t'   for 1 ≤ t ≤ T,   (2)

where A_+' = [A_1' ⋯ A_p' c'] and x_t' = [y_{t-1}' ⋯ y_{t-p}' 1] for 1 ≤ t ≤ T. The dimension of A_+ is m × n, where m = np + 1. The reduced-form representation implied by Equation (2) is

y_t' = x_t' B + u_t'   for 1 ≤ t ≤ T,   (3)

where B = A_+ A_0^{-1}, u_t' = ε_t' A_0^{-1}, and E[u_t u_t'] = Σ = (A_0 A_0')^{-1}.
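As a point of reference for the notation above, the following is a minimal Python sketch (an illustration added here, not the authors' code) of how the stacked data matrices behind the reduced-form representation in Equation (3) can be built; the array name `data` and the function name are assumptions.

```python
# Minimal illustrative sketch: build Y and X such that Y = X B + U row by row,
# i.e. y_t' = x_t' B + u_t' with x_t' = [y_{t-1}' ... y_{t-p}' 1].
import numpy as np

def build_YX(data: np.ndarray, p: int):
    """data is a (T0 x n) array of observations; returns Y (T x n) and X (T x m), m = n*p + 1."""
    T0, n = data.shape
    T = T0 - p
    Y = data[p:, :]                                            # y_t for t = 1, ..., T
    lags = [data[p - l:T0 - l, :] for l in range(1, p + 1)]    # y_{t-1}, ..., y_{t-p}
    X = np.hstack(lags + [np.ones((T, 1))])                    # append the constant term
    return Y, X
```

These are the same Y and X that appear in the posterior formulas of Section 3.5 below.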

The matrices B and Σ are the reduced-form parameters, while A_0 and A_+ are the structural parameters.

3.2 The Identification Problem and Sign and Zero Restrictions

Following Rothenberg (1971), the parameters (A_0, A_+) and (Ã_0, Ã_+) are observationally equivalent if and only if they imply the same distribution of y_t for all t. For the linear Gaussian models of the type studied in this paper, this statement is equivalent to saying that (A_0, A_+) and (Ã_0, Ã_+) are observationally equivalent if and only if they have the same reduced-form representation. This implies that the structural parameters (A_0, A_+) and (Ã_0, Ã_+) are observationally equivalent if and only if A_0 = Ã_0 Q and A_+ = Ã_+ Q for some Q ∈ O(n), which is the set of all n × n orthogonal matrices.

To solve the identification problem, one often imposes sign and/or zero restrictions on either the structural parameters or some function of the structural parameters, like the IRFs. For instance, the element in row i and column j of (A_0^{-1})' is the contemporaneous response of the i-th variable to the j-th shock.[3] Restricting this element to be zero would imply that the i-th variable does not respond contemporaneously to the j-th shock. Restricting this element to be positive would imply that the initial response of the i-th variable to the j-th shock is positive. The theory and simulation techniques that we develop apply to sign and zero restrictions on any function F(A_0, A_+) from the structural parameters to the space of r × n matrices that satisfies the condition F(A_0 Q, A_+ Q) = F(A_0, A_+) Q for every Q ∈ O(n), which is true for IRFs.[4]

To set the notation, let S_j be an s_j × r matrix of full row rank, where 0 ≤ s_j, and let Z_j be a z_j × r matrix of full row rank, where 0 ≤ z_j ≤ n - j for 1 ≤ j ≤ n. The S_j will define the sign restrictions on the j-th structural shock and the Z_j will define the zero restrictions on the j-th structural shock for 1 ≤ j ≤ n. In particular, we assume that S_j F(A_0, A_+) e_j > 0 and Z_j F(A_0, A_+) e_j = 0 for 1 ≤ j ≤ n, where e_j is the j-th column of I_n.

[3] More generally, the IRF of the i-th variable to the j-th structural shock at horizon k is the element in row i and column j of the matrix L_k(A_0, A_+), where L_0(A_0, A_+) = (A_0^{-1})' and L_k(A_0, A_+) = \sum_{l=1}^{\min\{k,p\}} (A_l A_0^{-1})' L_{k-l}(A_0, A_+) for k > 0. An induction argument on k shows that L_k(A_0 Q, A_+ Q) = L_k(A_0, A_+) Q for all k and Q ∈ O(n).
[4] In addition, a regularity condition on F is needed. For instance, it suffices to assume that F is differentiable and that its derivative is of full row rank.
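To make this notation concrete, here is a small Python sketch (an illustration, not the paper's code) of checking whether a draw satisfies restrictions of the form S_j F(A_0, A_+) e_j > 0 and Z_j F(A_0, A_+) e_j = 0, taking F to be the impact IRF L_0 = (A_0^{-1})'; the function and argument names are assumptions.

```python
# Illustrative sketch only: check S_j F(A0, A+) e_j > 0 and Z_j F(A0, A+) e_j = 0
# with F taken to be the impact IRF L_0(A0, A+) = (A0^{-1})'.
import numpy as np

def impact_irf(A0: np.ndarray, A_plus: np.ndarray) -> np.ndarray:
    """Element (i, j) is the contemporaneous response of variable i to shock j."""
    return np.linalg.inv(A0).T

def restrictions_hold(A0, A_plus, S, Z, zero_tol=1e-8):
    """S and Z are lists of s_j x r and z_j x r matrices (possibly with zero rows)."""
    F = impact_irf(A0, A_plus)          # r x n (here r = n)
    n = A0.shape[0]
    for j in range(n):                  # j = 0, ..., n-1 corresponds to shocks 1, ..., n
        f_j = F[:, j]
        if S[j].shape[0] > 0 and not np.all(S[j] @ f_j > 0):
            return False
        if Z[j].shape[0] > 0 and np.max(np.abs(Z[j] @ f_j)) > zero_tol:
            return False
    return True
```

In the algorithms below the zero restrictions hold by construction, so only the sign part of this check is actually needed there.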

In Rubio-Ramírez, Waggoner and Zha (2010), sufficient conditions for identification are established. The sufficient condition for identification is that there must be an ordering of the structural shocks so that there are at least n - j zero restrictions on the j-th structural shock, for 1 ≤ j ≤ n. In addition, there must be at least one sign restriction on the impulse responses to each structural shock.[5] The necessary order condition for identification, Rothenberg (1971), is that the number of zero restrictions is greater than or equal to n(n - 1)/2. In this paper, we will have fewer than n - j zero restrictions on the j-th structural shock. This means that no matter how many sign restrictions are imposed, identification will only be set identification. However, if there are enough sign restrictions, then the identified sets will be small and it will be possible to draw meaningful economic conclusions.

3.3 The Orthogonal Reduced-Form Parameterization

Equation (2) represents the SVAR in terms of the structural parameterization, which is characterized by A_0 and A_+. Given the discussion in Section 3.2, the SVAR can alternatively be written in what we call the orthogonal reduced-form parameterization. This parameterization is characterized by the reduced-form parameters B and Σ together with an orthogonal matrix Q and is given by the following equation

y_t' = x_t' B + ε_t' Q' h(Σ)   for 1 ≤ t ≤ T,   (4)

where the n × n matrix h(Σ) is any decomposition of the covariance matrix Σ satisfying h(Σ)' h(Σ) = Σ. We will take h to be the Cholesky decomposition, though any differentiable decomposition would do. As we will see, the orthogonal reduced-form parameterization is convenient for drawing. However, the researcher will be interested in making draws from the structural parameterization; thus, we will need to transform (B, Σ, Q) into (A_0, A_+). Given Equation (2), Equation (4), and the decomposition h, we can define a mapping between (A_0, A_+) and (B, Σ, Q) by

f_h(A_0, A_+) = (A_+ A_0^{-1}, (A_0 A_0')^{-1}, h((A_0 A_0')^{-1}) A_0),

where the three components are B, Σ, and Q, respectively. By a direct computation, it is easy to see that h((A_0 A_0')^{-1}) A_0 is an orthogonal matrix. The function f_h is invertible, with inverse defined by

f_h^{-1}(B, Σ, Q) = (h(Σ)^{-1} Q, B h(Σ)^{-1} Q),

where the two components are A_0 and A_+, respectively.

[5] Often, one is only interested in partial identification. If there is an ordering such that there are at least n - j zero restrictions and at least one sign restriction on the impulse responses to the j-th structural shock for 1 ≤ j ≤ k, then the first k structural shocks under this ordering will be identified.
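The mapping f_h and its inverse are simple to code. The following Python sketch (an illustration under the Cholesky choice of h described above, not the authors' implementation) is reused by the algorithm sketches later in this document.

```python
# Illustrative sketch of f_h and its inverse, with h(Sigma)' h(Sigma) = Sigma and
# h taken to be the transpose of the lower-triangular Cholesky factor.
import numpy as np

def h(Sigma: np.ndarray) -> np.ndarray:
    return np.linalg.cholesky(Sigma).T          # upper triangular, h' h = Sigma

def f_h(A0: np.ndarray, A_plus: np.ndarray):
    """Structural -> orthogonal reduced form: (A0, A+) -> (B, Sigma, Q)."""
    A0_inv = np.linalg.inv(A0)
    B = A_plus @ A0_inv
    Sigma = np.linalg.inv(A0 @ A0.T)
    Q = h(Sigma) @ A0                           # orthogonal by construction
    return B, Sigma, Q

def f_h_inv(B: np.ndarray, Sigma: np.ndarray, Q: np.ndarray):
    """Orthogonal reduced form -> structural: (B, Sigma, Q) -> (A0, A+)."""
    A0 = np.linalg.inv(h(Sigma)) @ Q            # A0 = h(Sigma)^{-1} Q
    A_plus = B @ A0                             # A+ = B h(Sigma)^{-1} Q = B A0
    return A0, A_plus
```

A quick check that the Q returned by f_h satisfies Q' Q ≈ I_n is a useful sanity test of any implementation.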

The orthogonal reduced-form parameterization makes clear how the structural parameters depend on the reduced-form parameters and orthogonal matrices. Given the reduced-form parameters and a decomposition h, one can consider each value of Q ∈ O(n) as a particular choice of structural parameters.

3.4 Change of Variable Theorems

As mentioned above, the researcher will be interested in making draws from the structural parameterization but it is simpler to make draws from the orthogonal reduced-form parameterization and then transform them into the structural parameterization using f_h^{-1}. Hence, it is crucial to understand how to transform densities between these two representations. In this section, we discuss the change of variable theorems that will allow us to do exactly that. While we will apply these theorems to the structural parameterization, they can be used with any parameterization as long as the mapping between the orthogonal reduced-form parameterization and the desired parameterization can be explicitly computed.[6] As we will see, there are differences between transforming densities when there are only sign restrictions as opposed to when there are also zero restrictions. Details and proofs, or references to proofs, will be relegated to Appendix A.

The usual change of variable theorem can be stated as follows.

Theorem 1. Let U ⊂ R^b be an open set and let γ : U → R^b be a one-to-one and continuously differentiable function. If A ⊂ γ(U) and λ : A → R is an integrable function, then

\int_A λ(v) dv = \int_{γ^{-1}(A)} λ(γ(u)) |det(Dγ(u))| du.   (5)

Proof. See Appendix A.1.

Note that the integral on each side of Equation (5) is with respect to Lebesgue measure over R^b. The term v_γ(u) = |det(Dγ(u))| is the volume element of γ at u, where Dγ(u) denotes the derivative of γ evaluated at u. Note that Dγ(u) is a b × b matrix. When the range of γ is not R^b, but is instead a b-dimensional smooth manifold in R^a, one has the following change of variable theorem.[7]

[6] In particular, it is easy to replicate the theory and algorithms of this paper for the IRF parameterization. This parameterization is characterized by the IRFs of the SVAR and the details are in Appendix B.
[7] A b-dimensional smooth manifold in R^a is a subset V of R^a that admits a local b-dimensional coordinate system in V. This means that for each v ∈ V, there is an open set U ⊂ R^b and a continuously differentiable function γ : U → V such that γ(U) is open in V, Dγ(u) is of rank b for every u ∈ U, the inverse of γ exists and is continuous, and v ∈ γ(U). The function γ : U → V is a coordinate system in V about v and γ(U) is a coordinate patch in V about v.

Theorem 2. Let U ⊂ R^b be an open set, let V ⊂ R^a be a b-dimensional smooth manifold, and let γ : U → V be a one-to-one and continuously differentiable function. If A ⊂ γ(U) and λ : A → R is an integrable function, then

\int_A λ(v) dv = \int_{γ^{-1}(A)} λ(γ(u)) det(Dγ(u)' Dγ(u))^{1/2} du.   (6)

Proof. See Appendix A.1.

The integral on the right hand side of Equation (6) is with respect to Lebesgue measure over R^b, but the integral on the left hand side cannot be with respect to Lebesgue measure in R^a if a > b because V is of measure zero in R^a. However, the smooth manifold structure of V, together with Lebesgue measure over R^b, uniquely defines the volume of a set in V, which determines a well-defined measure over V that we call the volume measure. More formally, if γ : U → V is a coordinate system, A ⊂ γ(U), γ^{-1}(A) is Lebesgue measurable over R^b, and λ(v) = 1 for every v ∈ A, then Equation (6) can be taken as the definition of the volume of A. It can be shown that this definition is independent of the choice of coordinate system and can be extended to sets not contained in a single coordinate patch. The integral on the left hand side of Equation (6) is with respect to the volume measure over V. The matrix Dγ(u) is a × b, so that Dγ(u)' Dγ(u) is a b × b matrix. As before, the term v_γ(u) = det(Dγ(u)' Dγ(u))^{1/2} is the volume element of γ at u. When a = b, Theorem 2 reduces to Theorem 1.

As we will see below, Theorem 2 will be used to transform densities when only sign restrictions are considered. The final generalization of the change of variable theorem is given below, which will be of use when there are zero restrictions.

11 respect to te volume measure over U. By assumption, γ(u) is contained in te d-dimensional smoot manifold V and te integral on te left and side of Equation (7) is wit respect to te volume measure over V. Te matrix Dγ(u) is a b, so tat N u Dγ(u) Dγ(u) N u is d d. Te matrix N u is not unique. If N u and Ñu are two matrices wose columns form an ortonormal basis for te null space of Dβ(u), ten tere exists a d d ortogonal matrix X suc tat N u = ÑuX. Because te determinant of a product of square matrices is equal to te product of te determinants and te determinant of a ortogonal matrix is plus or minus one, te value of te expression det(n u Dγ(u) Dγ(u) N u ) is independent of te coice of N u. As before, te term det(n u Dγ(u) Dγ(u) N u ) 1 2 is te volume element of γ restricted to U at u. To empasize te importance of te restriction, we denote tis volume element by v γ U (u). Bot Teorems 2 and 3 will be used in subsequent sections in te following way. We will ave a distribution over V wose density wit respect to te volume measure over V evaluated at v is p(v). A draw v from tis distribution can be uniquely transformed to u = γ 1 (v) and we will want to compute te density over te u. 8 Wen tere are only sign restrictions, Teorem 2 will be applicable and te density of u will be p(γ(u))v γ (u) wit respect to Lebesgue measure over R b. Wen tere are zero restrictions given by β, Teorem 3 will be applicable and te density of u will be p(γ(u))v γ U (u) wit respect to te volume measure over U. Wen applying tese teorems, v will be an ortogonal reducedform parameter and u will be a structural parameter. If te researcer is using a parameterization oter tan te structural parameterization, u will belong to it instead. In tis section we ave discussed cange of variable formulas for integration over smoot manifolds wit respect to te volume measure. In order to fix ideas, it is useful to relate te volume measure over commonly used smoot manifolds to oter measures tat could be defined over te same smoot manifolds. Some of tese examples will be used later in te paper. First, an open subset U R b is a b-dimensional smoot manifold in R b. Te volume measure over U is identical to Lebesgue measure over R b in tis case. Second, an open subset of a b-dimensional linear subspace of R a is a b-dimensional smoot manifold in R a and so tere is a well-defined volume measure over tese sets. For instance, te set of all n n symmetric and positive definite matrices is a n(n+1) 2 -dimensional smoot manifold in R n2. If V R a is a b-dimensional linear subspace, we know tat tere exists a linear mapping γ from R b onto V. Because γ is linear, te volume element is constant. Tus, by Teorem 2, te volume of A V will be te 8 It must be te case tat te support of p(v) is contained in γ(u) wen applying Teorem 2 and contained in γ(u) wen applying Teorem 3. Tis ensures tat u = γ 1 (v) exists and is unique. 10

In this section we have discussed change of variable formulas for integration over smooth manifolds with respect to the volume measure. In order to fix ideas, it is useful to relate the volume measure over commonly used smooth manifolds to other measures that could be defined over the same smooth manifolds. Some of these examples will be used later in the paper.

First, an open subset U ⊂ R^b is a b-dimensional smooth manifold in R^b. The volume measure over U is identical to Lebesgue measure over R^b in this case. Second, an open subset of a b-dimensional linear subspace of R^a is a b-dimensional smooth manifold in R^a and so there is a well-defined volume measure over these sets. For instance, the set of all n × n symmetric and positive definite matrices is an n(n+1)/2-dimensional smooth manifold in R^{n^2}. If V ⊂ R^a is a b-dimensional linear subspace, we know that there exists a linear mapping γ from R^b onto V. Because γ is linear, the volume element is constant. Thus, by Theorem 2, the volume of A ⊂ V will be the Lebesgue measure of γ^{-1}(A) ⊂ R^b times the constant value of the volume element. Often the constant volume element is ignored, in which case the measure will not be the volume measure. Because of the simple nature of linear subspaces this generally causes no problems; however, in this paper we will always use the volume measure and thus the constant volume element will always be explicitly taken into account. Third, the set of all n × n orthogonal matrices, O(n), is an n(n-1)/2-dimensional smooth manifold in R^{n^2}. In addition to the volume measure over O(n), there are Haar measures defined over O(n), a Haar measure being any measure that is invariant to multiplication by orthogonal matrices. Any two Haar measures differ only by a constant scale factor. Because volume is invariant to rigid transformations, which multiplication by an orthogonal matrix is, the volume measure over O(n) is a Haar measure.

Throughout the rest of the paper all densities will be with respect to the volume measure, even though we will not explicitly state it. Sometimes, for instance when there are no zero restrictions and we are working with A_0, A_+, or B, the volume measure will be Lebesgue measure. However, when we are working with symmetric and positive definite matrices, orthogonal matrices, or when there are zero restrictions, the volume measure will not be Lebesgue measure.

3.5 Conjugate Priors and Posteriors

While the techniques developed here will work with any prior distribution, they are most efficient when used with prior distributions that belong to a certain family of conjugate distributions.[9] For the reduced-form representation in Equation (3), the normal-inverse-Wishart family of distributions is conjugate.[10]

[9] A family of distributions is conjugate if the prior distribution being a member of this family implies that the posterior distribution is a member of the family. Some authors also require the likelihood to be a member of the family.
[10] A normal-inverse-Wishart distribution over the reduced-form parameters is characterized by four parameters: a scalar ν ≥ n, an n × n symmetric and positive definite matrix Φ, an m × n matrix Ψ, and an m × m symmetric and positive definite matrix Ω. We denote this distribution by NIW(ν, Φ, Ψ, Ω) and its density by NIW_{(ν,Φ,Ψ,Ω)}(B, Σ). Furthermore, NIW_{(ν,Φ,Ψ,Ω)}(B, Σ) ∝ det(Σ)^{-(ν+n+1)/2} e^{-(1/2) tr(Φ Σ^{-1})} × det(Σ)^{-m/2} e^{-(1/2) vec(B-Ψ)' (Σ ⊗ Ω)^{-1} vec(B-Ψ)}, where the first factor is the inverse-Wishart part and the second is the conditionally normal part.

If the prior distribution over the reduced-form parameters is NIW(ν, Φ, Ψ, Ω), then the posterior distribution over the reduced-form parameters is NIW(ν̄, Φ̄, Ψ̄, Ω̄), where

ν̄ = T + ν,
Ω̄ = (X'X + Ω^{-1})^{-1},
Ψ̄ = Ω̄ (X'Y + Ω^{-1} Ψ),
Φ̄ = Y'Y + Φ + Ψ' Ω^{-1} Ψ - Ψ̄' Ω̄^{-1} Ψ̄,

for Y = [y_1 ⋯ y_T]' and X = [x_1 ⋯ x_T]'.

If π(Q | B, Σ) is any conditional density over O(n), then prior densities of the form NIW_{(ν,Φ,Ψ,Ω)}(B, Σ) π(Q | B, Σ) over the orthogonal reduced-form parameterization will be conjugate. We will take π(Q | B, Σ) to be the uniform density. We make this choice for three reasons. First, as we will see below, prior densities over the orthogonal reduced-form parameterization of this form induce standard prior densities over the structural parameterization. Second, prior and posterior densities over the orthogonal reduced-form parameterization of this form will be very easy to independently draw from. Third, the likelihood is of this form and so this family of densities will be conjugate in even the stronger sense. We call this the uniform-normal-inverse-Wishart distribution over the orthogonal reduced-form parameterization, denote it by UNIW(ν, Φ, Ψ, Ω), and denote its density over the orthogonal reduced-form parameterization by UNIW_{(ν,Φ,Ψ,Ω)}(B, Σ, Q).[11]

Densities over the orthogonal reduced-form parameterization induce densities over the structural parameterization via the function f_h. If π(B, Σ, Q) is any density over the orthogonal reduced-form parameterization, then by Theorem 2 the induced density over the structural parameterization will be π(f_h(A_0, A_+)) v_{f_h}(A_0, A_+). It is easy to verify that the hypotheses of Theorem 2 are satisfied and so Theorem 2 is applicable. The volume element could be computed numerically, but for any function f_h it can be computed analytically using Proposition 1, described below. The reader should notice that the volume element does not depend on the choice of h.

[11] It is the case that UNIW_{(ν,Φ,Ψ,Ω)}(B, Σ, Q) = NIW_{(ν,Φ,Ψ,Ω)}(B, Σ) / \int_{O(n)} 1 dQ. Because O(n) is compact, \int_{O(n)} 1 dQ is finite.
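The posterior updating above is a one-liner per parameter. A minimal Python sketch (illustrative, reusing the Y and X built earlier; not the authors' code):

```python
# Illustrative sketch of the NIW posterior update: prior (nu, Phi, Psi, Omega), data (Y, X).
import numpy as np

def niw_posterior(nu, Phi, Psi, Omega, Y, X):
    """Return (nu_bar, Phi_bar, Psi_bar, Omega_bar) of the NIW posterior."""
    T = Y.shape[0]
    Omega_inv = np.linalg.inv(Omega)
    nu_bar = T + nu
    Omega_bar = np.linalg.inv(X.T @ X + Omega_inv)
    Psi_bar = Omega_bar @ (X.T @ Y + Omega_inv @ Psi)
    Phi_bar = (Y.T @ Y + Phi + Psi.T @ Omega_inv @ Psi
               - Psi_bar.T @ np.linalg.inv(Omega_bar) @ Psi_bar)
    return nu_bar, Phi_bar, Psi_bar, Omega_bar
```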

Proposition 1. The volume element of f_h at (A_0, A_+) is

v_{f_h}(A_0, A_+) = 2^{n(n+1)/2} |det(A_0)|^{-(2n+m+1)}.

Proof. See Appendix A.2.

Using Theorem 2, Proposition 1, and the definition of the normal-inverse-Wishart distribution, the density over the structural parameterization induced by the uniform-normal-inverse-Wishart density over the orthogonal reduced-form parameterization is

NGN_{(ν,Φ,Ψ,Ω)}(A_0, A_+) = UNIW_{(ν,Φ,Ψ,Ω)}(f_h(A_0, A_+)) v_{f_h}(A_0, A_+)
∝ |det(A_0)|^{ν-n} e^{-(1/2) vec(A_0)' (I_n ⊗ Φ) vec(A_0)} × e^{-(1/2) vec(A_+ - Ψ A_0)' (I_n ⊗ Ω)^{-1} vec(A_+ - Ψ A_0)},   (8)

where the first factor is the generalized-normal part and the second is the conditionally normal part. Thus, if we independently draw (B, Σ, Q) from a uniform-normal-inverse-Wishart distribution over the orthogonal reduced-form parameterization with parameters (ν, Φ, Ψ, Ω) and then transform the draws to (A_0, A_+) using f_h^{-1}, we are in fact independently drawing from the density over the structural parameterization represented in Equation (8). We call this a normal-generalized-normal distribution over the structural parameterization, denote it by NGN(ν, Φ, Ψ, Ω), and denote its density over the structural parameterization by NGN_{(ν,Φ,Ψ,Ω)}(A_0, A_+). When ν = n the marginal distribution of vec(A_0) is normal with mean zero and variance I_n ⊗ Φ^{-1}. In general we call it a generalized-normal distribution. The distribution of vec(A_+), conditional on A_0, is normal with mean vec(Ψ A_0) and variance I_n ⊗ Ω.

Because the uniform-normal-inverse-Wishart family of distributions is conjugate over the orthogonal reduced-form parameterization, the normal-generalized-normal family of distributions over the structural parameterization is conjugate. This is because if the prior and posterior densities have the same functional form in one parameterization, then, because the volume element will be the same for the prior and posterior densities, the induced prior and posterior densities in the other parameterization will also have the same functional form. Normal-generalized-normal prior distributions over the structural parameterization are often used in the literature, particularly with ν = n. For instance, any prior distribution over the structural parameterization that can be implemented through dummy observations will be of this form. Thus, the Sims-Zha prior distribution over the structural parameterization is also of this form.
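For later use in the importance sampler, it is convenient to be able to evaluate the log of the unnormalized density in Equation (8) and the volume element of Proposition 1. A hedged Python sketch (the quadratic forms are rewritten as traces; the content follows directly from Equation (8) and Proposition 1, but the function names are our own):

```python
# Illustrative sketch: log of the Proposition 1 volume element and of the
# unnormalized NGN density in Equation (8).
import numpy as np

def log_volume_element_fh(A0: np.ndarray, m: int) -> float:
    """log v_{f_h}(A0, A+) = (n(n+1)/2) log 2 - (2n + m + 1) log|det A0|."""
    n = A0.shape[0]
    return 0.5 * n * (n + 1) * np.log(2.0) - (2 * n + m + 1) * np.log(abs(np.linalg.det(A0)))

def log_ngn_kernel(A0, A_plus, nu, Phi, Psi, Omega):
    """Unnormalized log NGN_(nu,Phi,Psi,Omega)(A0, A+); uses the identity
    vec(X)'(I_n kron M) vec(X) = tr(X' M X) for column-stacked vec."""
    n = A0.shape[0]
    resid = A_plus - Psi @ A0
    term_A0 = (nu - n) * np.log(abs(np.linalg.det(A0))) - 0.5 * np.trace(A0.T @ Phi @ A0)
    term_Aplus = -0.5 * np.trace(resid.T @ np.linalg.inv(Omega) @ resid)
    return term_A0 + term_Aplus
```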

If the researcher needs to work with a parameterization other than the structural parameterization, an analog to Equation (8) can easily be obtained as long as the mapping between the orthogonal reduced-form parameterization and the desired parameterization can be explicitly computed. However, it may not be possible to derive an analytical expression for the volume element as in Proposition 1, but Theorem 2 can always be used to numerically compute the density over the desired parameterization induced by the uniform-normal-inverse-Wishart density over the orthogonal reduced-form parameterization.

It is very easy to independently draw from the uniform-normal-inverse-Wishart distribution. Matlab, Mathematica, and R have routines for making independent draws from both the inverse-Wishart distribution and the normal distribution. There are efficient algorithms for making independent draws from the uniform distribution over O(n). Faust (1998), Canova and Nicoló (2002), Uhlig (2005), and Rubio-Ramírez, Waggoner and Zha (2010) all propose algorithms to do this. The algorithm of Rubio-Ramírez, Waggoner and Zha (2010) is the most efficient, particularly for larger SVAR systems (e.g., n > 4).[12] Rubio-Ramírez, Waggoner and Zha's (2010) results are based on the following theorem.

Theorem 4. Let X be an n × n random matrix with each element having an independent standard normal distribution. Let X = QR be the QR decomposition of X with the diagonal of R normalized to be positive. The random matrix Q is orthogonal and is a draw from the uniform distribution over O(n).

Proof. The proof follows directly from Stewart (1980).

In this section we have explicitly derived expressions for prior densities over the structural parameterization that are conjugate and are induced by a uniform-normal-inverse-Wishart prior density over the orthogonal reduced-form parameterization. Furthermore, they are a family of prior densities often used in the literature.

[12] See Rubio-Ramírez, Waggoner and Zha (2010) for details.
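Theorem 4 translates into a few lines of code. A minimal Python sketch (illustrative; numpy's QR routine does not enforce the sign normalization, so it is applied explicitly):

```python
# Illustrative sketch of Theorem 4: a uniform draw from O(n).
import numpy as np

def draw_uniform_orthogonal(n: int, rng: np.random.Generator) -> np.ndarray:
    X = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(X)
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0        # guard against the measure-zero event R[j, j] = 0
    return Q * signs               # flip column j of Q whenever R[j, j] < 0

# Example: rng = np.random.default_rng(0); Q = draw_uniform_orthogonal(4, rng)
```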

4 Sign Restrictions

Because F is continuous, the set of all structural parameters satisfying the sign restrictions will be open in the set of all structural parameters. An important point to make here is that the condition F(A_0 Q, A_+ Q) = F(A_0, A_+) Q for every Q ∈ O(n) and the regularity condition are only needed to implement the algorithms for zero restrictions to be presented later. When only sign restrictions are considered, it is enough to assume that F is continuous. So, if the sign restrictions are non-degenerate, so that there is at least one parameter value satisfying the sign restrictions, then the set of all structural parameters satisfying the sign restrictions will be of positive measure in the set of all structural parameters. This justifies algorithms of the following type.

Algorithm 1. The following algorithm independently draws from the NGN(ν, Φ, Ψ, Ω) distribution over the structural parameterization conditional on the sign restrictions.

1. Draw (B, Σ) independently from the NIW(ν, Φ, Ψ, Ω) distribution.
2. Draw Q independently from the uniform distribution over O(n) using Theorem 4.
3. Keep (A_0, A_+) = f_h^{-1}(B, Σ, Q) if the sign restrictions are satisfied.
4. Return to Step 1 until the required number of draws has been obtained.

Algorithm 1 follows the steps highlighted in Section 3.4: it draws from a distribution over the orthogonal reduced-form parameterization conditional on the sign restrictions and then transforms the draws into the structural parameterization using f_h^{-1}. It follows from the discussion in Section 3.5 that the independent draws of (A_0, A_+) produced by Algorithm 1 will be from the NGN(ν, Φ, Ψ, Ω) distribution over the structural parameterization conditional on the sign restrictions.

As argued by Baumeister and Hamilton (2015), one should choose the parameterization so that at least some, if not all, of the parameters have an economic interpretation, and the prior distribution over the selected parameterization should be chosen to reflect what economic theory has to say about those parameters. Usually, this parameterization will not be the orthogonal reduced-form parameterization since those parameters, particularly the orthogonal matrix Q, are hard to interpret from an economic point of view. Although we agree with Baumeister and Hamilton (2015), we will draw from the orthogonal reduced-form parameterization because Algorithm 1 is relatively efficient. We will then transform the orthogonal reduced-form draws back to the desired parameterization. While Algorithm 1 is stated in terms of the structural parameterization, it will work for any parameterization as long as one can explicitly compute the transformation between the orthogonal reduced-form parameterization and the desired parameterization, and the draws produced by this algorithm will be from a conjugate distribution over the desired parameterization. Clearly, if the desired parameterization is the orthogonal reduced-form parameterization, the transformation is the identity and Algorithm 1 can be used to produce independent draws of (B, Σ, Q) from the UNIW(ν, Φ, Ψ, Ω) distribution over the orthogonal reduced-form parameterization conditional on the sign restrictions.
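Putting the earlier sketches together, here is a hedged end-to-end illustration of Algorithm 1 in Python. It reuses draw_uniform_orthogonal, f_h_inv, and restrictions_hold from the sketches above and uses scipy's inverse-Wishart sampler; it is meant as a sketch of the steps, not as the authors' implementation.

```python
# Illustrative sketch of Algorithm 1 (sign restrictions only).
import numpy as np
from scipy.stats import invwishart

def draw_niw(nu, Phi, Psi, Omega, rng):
    """One draw (B, Sigma) from NIW(nu, Phi, Psi, Omega):
    Sigma ~ IW(nu, Phi) and vec(B) | Sigma ~ N(vec(Psi), Sigma kron Omega)."""
    Sigma = invwishart.rvs(df=nu, scale=Phi, random_state=rng)
    m, n = Psi.shape
    L = np.linalg.cholesky(np.kron(Sigma, Omega))
    vecB = Psi.flatten(order="F") + L @ rng.standard_normal(m * n)
    return vecB.reshape((m, n), order="F"), Sigma

def algorithm_1(nu, Phi, Psi, Omega, S, Z, n_draws, rng):
    draws = []
    while len(draws) < n_draws:
        B, Sigma = draw_niw(nu, Phi, Psi, Omega, rng)        # Step 1
        Q = draw_uniform_orthogonal(Sigma.shape[0], rng)     # Step 2
        A0, A_plus = f_h_inv(B, Sigma, Q)                    # Step 3
        if restrictions_hold(A0, A_plus, S, Z):              # keep the draw if signs hold
            draws.append((A0, A_plus))
    return draws
```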

Given the desired parameterization, conjugate prior distributions have many useful properties, and if one wants to use a conjugate prior distribution that is induced by a uniform-normal-inverse-Wishart distribution over the orthogonal reduced-form parameterization, then Algorithm 1 is an efficient technique for making independent draws from the associated posterior distribution. If one wants to use a different prior distribution, then Algorithm 1 can still be used as the proposal density in an importance sampler. Of course the efficiency of this algorithm depends heavily on how close such a prior distribution is to one that can be induced by a uniform-normal-inverse-Wishart distribution over the orthogonal reduced-form parameterization. In this paper we work only with conjugate prior distributions over the structural parameterization whose density can be described by Equation (8), as they are commonly used in the literature, as mentioned in Section 3.5.

5 Zero Restrictions

We now adapt Algorithm 1 to handle the case of sign and zero restrictions. When there are only sign restrictions, the set of all structural parameters satisfying the restrictions is of positive measure in the set of all structural parameters. However, when there are both sign and zero restrictions, the set of all structural parameters satisfying the restrictions is of measure zero in the set of all structural parameters. This invalidates the direct use of Algorithm 1. But since the set of all structural parameters satisfying both the sign and zero restrictions is of positive measure in the set of all structural parameters satisfying the zero restrictions, if we could make independent draws from the set of all structural parameters satisfying the zero restrictions, then we could apply a variant of Algorithm 1 to obtain independent draws from the set of all structural parameters satisfying both the sign and zero restrictions. Algorithm 3, described in this section, does precisely that.

5.1 Zero Restrictions in the Orthogonal Reduced-Form Parameterization

The zero restrictions in the structural parameterization are Z_j F(A_0, A_+) e_j = 0 for 1 ≤ j ≤ n. From the definition of f_h and the fact that F(A_0 Q, A_+ Q) = F(A_0, A_+) Q, the zero restrictions in the orthogonal reduced-form parameterization are

Z_j F(f_h^{-1}(B, Σ, Q)) e_j = Z_j F(f_h^{-1}(B, Σ, I_n)) Q e_j = 0   for 1 ≤ j ≤ n.

This means that the zero restrictions in the orthogonal reduced-form parameterization are really just linear restrictions on each column of the orthogonal matrix Q, conditional on the reduced-form parameters (B, Σ). It is this observation that is key to being able to make independent draws from the set of all structural parameters satisfying the zero restrictions.

Algorithm 2. The following makes independent draws from a distribution over the structural parameterization conditional on the zero restrictions.

1. Draw (B, Σ) independently from the NIW(ν, Φ, Ψ, Ω) distribution.
2. For 1 ≤ j ≤ n, draw x_j ∈ R^{n+1-j-z_j} independently from a standard normal distribution and set w_j = x_j / ||x_j||.
3. Define Q = [q_1 ⋯ q_n] recursively by q_j = K_j w_j for any matrix K_j whose columns form an orthonormal basis for the null space of the (j - 1 + z_j) × n matrix M_j = [q_1 ⋯ q_{j-1} (Z_j F(f_h^{-1}(B, Σ, I_n)))']'.
4. Set (A_0, A_+) = f_h^{-1}(B, Σ, Q).
5. Return to Step 1 until the required number of draws has been obtained.

The null space of M_j will be of dimension n + 1 - j - z_j if and only if M_j is of full row rank. Because of the regularity condition on the function F, this will always be the case. The details of this argument appear in Appendix A.3. This is crucial because otherwise the product K_j w_j is not defined. It is also the case that the matrix K_j is not unique. If the columns of K_j form an orthonormal basis for the null space of M_j, then so will the columns of K_j X for any X ∈ O(n + 1 - j - z_j). The particular choice of K_j does not make a material difference in the output of Algorithm 2, but in the next section, when we compute the density over the structural parameterization conditional on the zero restrictions implied by Algorithm 2, we will need the function K_j = K_j(B, Σ, q_1, ..., q_{j-1}) to be differentiable almost everywhere.[13] In Appendix A.3 we define K_j so that it is differentiable almost everywhere.

[13] The function K_j depends on f_h^{-1}(B, Σ, I_n), and so implicitly requires Σ to be symmetric and positive definite. Thus the domain of K_j is not an open set in R^{n(m+n+j-1)}. In Appendix A.2, we extend the definition of f_h^{-1} so that the domain of K_j is an open set and the derivative can be defined. Also, in general, it is not possible to define K_j so that it is differentiable everywhere. For instance, if there are no restrictions, then the existence of an everywhere continuous K_j would imply that O(n) is topologically equivalent to a product of spheres, which is not true if n ≥ 3.
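Steps 2 and 3 of Algorithm 2 are the only non-standard part. The following Python sketch (illustrative; it uses scipy.linalg.null_space in place of the particular K_j constructed in Appendix A.3, and reuses f_h_inv from above) shows one way to build a Q that satisfies the zero restrictions.

```python
# Illustrative sketch of Steps 2-3 of Algorithm 2: build Q column by column so that
# Z_j F(f_h^{-1}(B, Sigma, I_n)) q_j = 0 and q_j is orthogonal to q_1, ..., q_{j-1}.
import numpy as np
from scipy.linalg import null_space

def draw_Q_zero_restrictions(B, Sigma, Z, F, rng):
    """Z is a list of z_j x r matrices (possibly with zero rows);
    F maps (A0, A_plus) to an r x n matrix, e.g. the impact IRF."""
    n = Sigma.shape[0]
    A0_tilde, A_plus_tilde = f_h_inv(B, Sigma, np.eye(n))
    F0 = F(A0_tilde, A_plus_tilde)                      # F(f_h^{-1}(B, Sigma, I_n)), r x n
    q_cols = []
    for j in range(n):
        rows = [q.reshape(1, -1) for q in q_cols]       # previously drawn columns of Q
        if Z[j].shape[0] > 0:
            rows.append(Z[j] @ F0)                      # zero restrictions on shock j
        if rows:
            K_j = null_space(np.vstack(rows))           # orthonormal basis of null(M_j)
        else:
            K_j = np.eye(n)                             # no restrictions on the first column
        x = rng.standard_normal(K_j.shape[1])
        w = x / np.linalg.norm(x)
        q_cols.append(K_j @ w)
    return np.column_stack(q_cols)
```

Completing the algorithm then amounts to drawing (B, Σ) from the NIW distribution, calling this function, and applying f_h_inv.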

Our implementation is quite straightforward, and the computation of each K_j requires only a single QR decomposition of an n × n invertible matrix.[14] By construction, the vector q_j is perpendicular to the rows of M_j and ||q_j|| = ||w_j|| = 1. Thus the matrix Q obtained in Steps 2 and 3 is orthogonal and Z_j F(f_h^{-1}(B, Σ, I_n)) Q e_j = 0 for 1 ≤ j ≤ n. So, the algorithm produces independent draws from a distribution over the structural parameterization conditional on the zero restrictions. In the next subsection, we show how to numerically compute the density of this distribution using Theorem 3. Unlike the sign-restriction-only case, the distribution over the structural parameterization conditional on the zero restrictions implied by Algorithm 2 is not equal to the NGN(ν, Φ, Ψ, Ω) distribution conditional on the zero restrictions. However, once we know how to numerically compute its density, we can use Algorithm 2 as a proposal distribution for an importance sampler to draw from the NGN(ν, Φ, Ψ, Ω) distribution over the structural parameterization conditional on the zero restrictions.

Because Algorithm 2 will be used as a proposal distribution for an importance sampler for a distribution whose support is the set of all structural parameters satisfying the zero restrictions, it must be the case that the support of the distribution implied by Algorithm 2 is also the set of all structural parameters that satisfy the zero restrictions. To see that this is the case, suppose that (A_0, A_+) = f_h^{-1}(B, Σ, Q) satisfies the zero restrictions. To show that these parameters are in the support of the distribution implied by Algorithm 2, it suffices to show that there exist w_j ∈ R^{n+1-j-z_j}, for 1 ≤ j ≤ n, such that Step 3 of Algorithm 2 maps to the orthogonal matrix Q. Let w_j = K_j' q_j. Since f_h^{-1}(B, Σ, Q) satisfies the zero restrictions and the matrix Q is orthogonal, the j-th column of Q is in the null space of M_j. Thus K_j w_j = K_j K_j' q_j = q_j, because multiplication by K_j K_j' is projection onto the null space of M_j.

Algorithm 2 also follows the steps highlighted in Section 3.4: it draws from a distribution over the orthogonal reduced-form parameterization conditional on the zero restrictions and then transforms the draws into the structural parameterization using f_h^{-1}. As mentioned, in this case the independent draws of (A_0, A_+) produced by Algorithm 2 will not be from the NGN(ν, Φ, Ψ, Ω) distribution over the structural parameterization conditional on the zero restrictions. The density implied by Algorithm 2 will be analyzed below.

[14] In Matlab, an obvious choice would be to define K_j = null(M_j). While it is surely the case that this choice would be differentiable almost everywhere, to prove this would require details of the Matlab implementation of this function.

5.2 The Density Implied by Algorithm 2

In this section we use Theorem 3 to show how to numerically compute the distribution over the structural parameterization conditional on the zero restrictions implied by Algorithm 2. In order to do that we need to carefully characterize the mapping implied by the steps in the algorithm. Step 1 of Algorithm 2 independently draws B and Σ from the NIW(ν, Φ, Ψ, Ω) distribution. Step 2 draws w_j from the uniform distribution on the unit sphere in R^{n+1-j-z_j}. This implies that the density over (B, Σ, w_1, ..., w_n) will be proportional to NIW_{(ν,Φ,Ψ,Ω)}(B, Σ). Step 3 maps (B, Σ, w_1, ..., w_n) to (B, Σ, Q) and Step 4 maps (B, Σ, Q) to (A_0, A_+) = f_h^{-1}(B, Σ, Q). It is this composite mapping, together with Theorem 3, that we will use to compute the density.

It will be shown in Appendix A.3 that there exists an open set V ⊂ R^{nm+n^2+n^2} such that the functions K_j = K_j(B, Σ, q_1, ..., q_{j-1}) can be defined so that they are differentiable for all (B, Σ, Q) ∈ V.[15] Thus, we can define a differentiable function g : V → R^{nm+n^2+\sum_{j=1}^{n}(n+1-j-z_j)} by

g(B, Σ, Q) = (B, Σ, (K_1(B, Σ)' q_1, ..., K_n(B, Σ, q_1, ..., q_{n-1})' q_n)).

On the set of all (B, Σ, Q) ∈ V such that Σ is symmetric and positive definite, Q is orthogonal, and (A_0, A_+) = f_h^{-1}(B, Σ, Q) satisfies the zero restrictions, the function g will be one-to-one. The easiest way to see this is that the function defined by Step 3 of Algorithm 2 is the inverse of g on this restricted set. The argument is identical to that used to show that the support of the distribution implied by Algorithm 2 is the set of all structural parameters satisfying the zero restrictions. Let U = f_h^{-1}(V), which will be an open set in the set of all structural parameters. The composite function g ∘ f_h, when restricted to the (A_0, A_+) ∈ U that satisfy the zero restrictions, will be the inverse of the function defined by Steps 3 and 4 of Algorithm 2. If Z denotes the set of all structural parameters that satisfy the zero restrictions, then by Theorem 3 the density over the structural parameterization conditional on the zero restrictions implied by Algorithm 2 is proportional to

NIW_{(ν,Φ,Ψ,Ω)}(B, Σ) v_{(g∘f_h)|Z}(A_0, A_+),

where (B, Σ, Q) = f_h(A_0, A_+). It is easy to verify that the hypotheses of Theorem 3 are satisfied. The only difficulty is to check whether the derivative of the function describing the zero restrictions, which is given by β(A_0, A_+) = (Z_j F(A_0, A_+) e_j)_{j=1}^{n}, has full row rank. This will follow from the regularity conditions on the function F, the details of which are in Appendix A.3.

[15] Here, we are implicitly considering K_j to be a function of Q, even though it actually only depends on the first j - 1 columns of Q.

Finally, it must be the case that any structural parameter satisfying the zero restrictions must almost surely be in the set U. If this were not the case, then there would be a set of positive measure for which the techniques of this section would not apply. As with the other details in this section, this will be shown in Appendix A.3.

5.3 An Importance Sampler

The results of Sections 5.1 and 5.2 show that, first, Algorithm 2 generates independent draws from a distribution over the structural parameterization conditional on the zero restrictions that is not equal to the NGN(ν, Φ, Ψ, Ω) distribution conditional on the zero restrictions and, second, we know how to numerically compute this density. Thus, they justify the following importance sampler algorithm to independently draw from the NGN(ν, Φ, Ψ, Ω) distribution over the structural parameterization conditional on the sign and zero restrictions.

Algorithm 3. The following algorithm independently draws from the NGN(ν, Φ, Ψ, Ω) distribution over the structural parameterization conditional on the sign and zero restrictions.

1. Use Algorithm 2 to independently draw (A_0, A_+).
2. If (A_0, A_+) satisfies the sign restrictions, then set its importance weight to

NGN_{(ν,Φ,Ψ,Ω)}(A_0, A_+) / [NIW_{(ν,Φ,Ψ,Ω)}(B, Σ) v_{(g∘f_h)|Z}(A_0, A_+)] ∝ |det(A_0)|^{-(2n+m+1)} / v_{(g∘f_h)|Z}(A_0, A_+),

where (B, Σ, Q) = f_h(A_0, A_+) and Z denotes the set of all structural parameters that satisfy the zero restrictions. Otherwise, set its importance weight to zero.

3. Return to Step 1 until the required number of draws has been obtained.

As was the case with Algorithms 1 and 2, Algorithm 3 follows the steps highlighted in Section 3.4: it draws from a distribution over the orthogonal reduced-form parameterization conditional on the sign and zero restrictions and then transforms the draws into the structural parameterization. It follows from the discussion in Section 5.2 that the independent draws of (A_0, A_+) produced by Algorithm 3 will be from the NGN(ν, Φ, Ψ, Ω) distribution over the structural parameterization conditional on the sign and zero restrictions.

Algorithm 3 inherits the key features of Algorithm 1. First, being able to independently draw (A_0, A_+) from the normal-generalized-normal family of distributions over the structural parameterization conditional on the sign and zero restrictions means that we can use Algorithm 3 to independently
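To close, here is a hedged Python sketch of the importance weights in Step 2 of Algorithm 3 and of how they would typically be used. It reuses the helpers sketched earlier; the volume element v_{(g∘f_h)|Z} is assumed to be supplied by a numerical routine based on Theorem 3 (for example, a finite-difference Jacobian as in the earlier volume_element sketch) and is not implemented here.

```python
# Illustrative sketch of the Algorithm 3 importance weights (not the authors' code).
import numpy as np

def importance_weights(draws, m, log_vol_gfh_Z, sign_ok):
    """draws: list of (A0, A_plus) from Algorithm 2; log_vol_gfh_Z(A0, A_plus) returns
    log v_{(g o f_h)|Z}; sign_ok checks the sign restrictions. Returns normalized weights."""
    log_w = np.full(len(draws), -np.inf)
    for i, (A0, A_plus) in enumerate(draws):
        if sign_ok(A0, A_plus):
            n = A0.shape[0]
            log_w[i] = (-(2 * n + m + 1) * np.log(abs(np.linalg.det(A0)))
                        - log_vol_gfh_Z(A0, A_plus))
    w = np.exp(log_w - np.max(log_w))      # stabilize before normalizing
    return w / w.sum()

# Posterior quantities are then weighted averages over the draws, or the draws can be
# resampled with probabilities equal to the weights to obtain an unweighted sample.
```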


More information

15-122: Principles of Imperative Computation, Summer 2011 Assignment 6: Trees and Secret Codes

15-122: Principles of Imperative Computation, Summer 2011 Assignment 6: Trees and Secret Codes 15-122: Principles of Imperative Computation, Summer 2011 Assignment 6: Trees and Secret Codes William Lovas (wlovas@cs) Karl Naden Out: Tuesday, Friday, June 10, 2011 Due: Monday, June 13, 2011 (Written

More information

MAC-CPTM Situations Project

MAC-CPTM Situations Project raft o not use witout permission -P ituations Project ituation 20: rea of Plane Figures Prompt teacer in a geometry class introduces formulas for te areas of parallelograms, trapezoids, and romi. e removes

More information

Section 1.2 The Slope of a Tangent

Section 1.2 The Slope of a Tangent Section 1.2 Te Slope of a Tangent You are familiar wit te concept of a tangent to a curve. Wat geometric interpretation can be given to a tangent to te grap of a function at a point? A tangent is te straigt

More information

12.2 Techniques for Evaluating Limits

12.2 Techniques for Evaluating Limits 335_qd /4/5 :5 PM Page 863 Section Tecniques for Evaluating Limits 863 Tecniques for Evaluating Limits Wat ou sould learn Use te dividing out tecnique to evaluate its of functions Use te rationalizing

More information

Piecewise Polynomial Interpolation, cont d

Piecewise Polynomial Interpolation, cont d Jim Lambers MAT 460/560 Fall Semester 2009-0 Lecture 2 Notes Tese notes correspond to Section 4 in te text Piecewise Polynomial Interpolation, cont d Constructing Cubic Splines, cont d Having determined

More information

Computing geodesic paths on manifolds

Computing geodesic paths on manifolds Proc. Natl. Acad. Sci. USA Vol. 95, pp. 8431 8435, July 1998 Applied Matematics Computing geodesic pats on manifolds R. Kimmel* and J. A. Setian Department of Matematics and Lawrence Berkeley National

More information

Chapter K. Geometric Optics. Blinn College - Physics Terry Honan

Chapter K. Geometric Optics. Blinn College - Physics Terry Honan Capter K Geometric Optics Blinn College - Pysics 2426 - Terry Honan K. - Properties of Ligt Te Speed of Ligt Te speed of ligt in a vacuum is approximately c > 3.0µ0 8 mês. Because of its most fundamental

More information

NOTES: A quick overview of 2-D geometry

NOTES: A quick overview of 2-D geometry NOTES: A quick overview of 2-D geometry Wat is 2-D geometry? Also called plane geometry, it s te geometry tat deals wit two dimensional sapes flat tings tat ave lengt and widt, suc as a piece of paper.

More information

19.2 Surface Area of Prisms and Cylinders

19.2 Surface Area of Prisms and Cylinders Name Class Date 19 Surface Area of Prisms and Cylinders Essential Question: How can you find te surface area of a prism or cylinder? Resource Locker Explore Developing a Surface Area Formula Surface area

More information

MAPI Computer Vision

MAPI Computer Vision MAPI Computer Vision Multiple View Geometry In tis module we intend to present several tecniques in te domain of te 3D vision Manuel Joao University of Mino Dep Industrial Electronics - Applications -

More information

Density Estimation Over Data Stream

Density Estimation Over Data Stream Density Estimation Over Data Stream Aoying Zou Dept. of Computer Science, Fudan University 22 Handan Rd. Sangai, 2433, P.R. Cina ayzou@fudan.edu.cn Ziyuan Cai Dept. of Computer Science, Fudan University

More information

Hash-Based Indexes. Chapter 11. Comp 521 Files and Databases Spring

Hash-Based Indexes. Chapter 11. Comp 521 Files and Databases Spring Has-Based Indexes Capter 11 Comp 521 Files and Databases Spring 2010 1 Introduction As for any index, 3 alternatives for data entries k*: Data record wit key value k

More information

The (, D) and (, N) problems in double-step digraphs with unilateral distance

The (, D) and (, N) problems in double-step digraphs with unilateral distance Electronic Journal of Grap Teory and Applications () (), Te (, D) and (, N) problems in double-step digraps wit unilateral distance C Dalfó, MA Fiol Departament de Matemàtica Aplicada IV Universitat Politècnica

More information

Fast Calculation of Thermodynamic Properties of Water and Steam in Process Modelling using Spline Interpolation

Fast Calculation of Thermodynamic Properties of Water and Steam in Process Modelling using Spline Interpolation P R E P R N T CPWS XV Berlin, September 8, 008 Fast Calculation of Termodynamic Properties of Water and Steam in Process Modelling using Spline nterpolation Mattias Kunick a, Hans-Joacim Kretzscmar a,

More information

CE 221 Data Structures and Algorithms

CE 221 Data Structures and Algorithms CE Data Structures and Algoritms Capter 4: Trees (AVL Trees) Text: Read Weiss, 4.4 Izmir University of Economics AVL Trees An AVL (Adelson-Velskii and Landis) tree is a binary searc tree wit a balance

More information

Symmetric Tree Replication Protocol for Efficient Distributed Storage System*

Symmetric Tree Replication Protocol for Efficient Distributed Storage System* ymmetric Tree Replication Protocol for Efficient Distributed torage ystem* ung Cune Coi 1, Hee Yong Youn 1, and Joong up Coi 2 1 cool of Information and Communications Engineering ungkyunkwan University

More information

, 1 1, A complex fraction is a quotient of rational expressions (including their sums) that result

, 1 1, A complex fraction is a quotient of rational expressions (including their sums) that result RT. Complex Fractions Wen working wit algebraic expressions, sometimes we come across needing to simplify expressions like tese: xx 9 xx +, xx + xx + xx, yy xx + xx + +, aa Simplifying Complex Fractions

More information

A Novel QC-LDPC Code with Flexible Construction and Low Error Floor

A Novel QC-LDPC Code with Flexible Construction and Low Error Floor A Novel QC-LDPC Code wit Flexile Construction and Low Error Floor Hanxin WANG,2, Saoping CHEN,2,CuitaoZHU,2 and Kaiyou SU Department of Electronics and Information Engineering, Sout-Central University

More information

Coarticulation: An Approach for Generating Concurrent Plans in Markov Decision Processes

Coarticulation: An Approach for Generating Concurrent Plans in Markov Decision Processes Coarticulation: An Approac for Generating Concurrent Plans in Markov Decision Processes Kasayar Roanimanes kas@cs.umass.edu Sridar Maadevan maadeva@cs.umass.edu Department of Computer Science, University

More information

Mean Shifting Gradient Vector Flow: An Improved External Force Field for Active Surfaces in Widefield Microscopy.

Mean Shifting Gradient Vector Flow: An Improved External Force Field for Active Surfaces in Widefield Microscopy. Mean Sifting Gradient Vector Flow: An Improved External Force Field for Active Surfaces in Widefield Microscopy. Margret Keuper Cair of Pattern Recognition and Image Processing Computer Science Department

More information

On the Use of Radio Resource Tests in Wireless ad hoc Networks

On the Use of Radio Resource Tests in Wireless ad hoc Networks Tecnical Report RT/29/2009 On te Use of Radio Resource Tests in Wireless ad oc Networks Diogo Mónica diogo.monica@gsd.inesc-id.pt João Leitão jleitao@gsd.inesc-id.pt Luis Rodrigues ler@ist.utl.pt Carlos

More information

A Cost Model for Distributed Shared Memory. Using Competitive Update. Jai-Hoon Kim Nitin H. Vaidya. Department of Computer Science

A Cost Model for Distributed Shared Memory. Using Competitive Update. Jai-Hoon Kim Nitin H. Vaidya. Department of Computer Science A Cost Model for Distributed Sared Memory Using Competitive Update Jai-Hoon Kim Nitin H. Vaidya Department of Computer Science Texas A&M University College Station, Texas, 77843-3112, USA E-mail: fjkim,vaidyag@cs.tamu.edu

More information

Test Generation for Acyclic Sequential Circuits with Hold Registers

Test Generation for Acyclic Sequential Circuits with Hold Registers Test Generation for Acyclic Sequential Circuits wit Hold Registers Tomoo Inoue, Debes Kumar Das, Ciio Sano, Takairo Miara, and Hideo Fujiwara Faculty of Information Sciences Computer Science and Engineering

More information

12.2 TECHNIQUES FOR EVALUATING LIMITS

12.2 TECHNIQUES FOR EVALUATING LIMITS Section Tecniques for Evaluating Limits 86 TECHNIQUES FOR EVALUATING LIMITS Wat ou sould learn Use te dividing out tecnique to evaluate its of functions Use te rationalizing tecnique to evaluate its of

More information

ANTENNA SPHERICAL COORDINATE SYSTEMS AND THEIR APPLICATION IN COMBINING RESULTS FROM DIFFERENT ANTENNA ORIENTATIONS

ANTENNA SPHERICAL COORDINATE SYSTEMS AND THEIR APPLICATION IN COMBINING RESULTS FROM DIFFERENT ANTENNA ORIENTATIONS NTNN SPHRICL COORDINT SSTMS ND THIR PPLICTION IN COMBINING RSULTS FROM DIFFRNT NTNN ORINTTIONS llen C. Newell, Greg Hindman Nearfield Systems Incorporated 133. 223 rd St. Bldg. 524 Carson, C 9745 US BSTRCT

More information

Optimal In-Network Packet Aggregation Policy for Maximum Information Freshness

Optimal In-Network Packet Aggregation Policy for Maximum Information Freshness 1 Optimal In-etwork Packet Aggregation Policy for Maimum Information Fresness Alper Sinan Akyurek, Tajana Simunic Rosing Electrical and Computer Engineering, University of California, San Diego aakyurek@ucsd.edu,

More information

An Algorithm for Loopless Deflection in Photonic Packet-Switched Networks

An Algorithm for Loopless Deflection in Photonic Packet-Switched Networks An Algoritm for Loopless Deflection in Potonic Packet-Switced Networks Jason P. Jue Center for Advanced Telecommunications Systems and Services Te University of Texas at Dallas Ricardson, TX 75083-0688

More information

CHAPTER 7: TRANSCENDENTAL FUNCTIONS

CHAPTER 7: TRANSCENDENTAL FUNCTIONS 7.0 Introduction and One to one Functions Contemporary Calculus 1 CHAPTER 7: TRANSCENDENTAL FUNCTIONS Introduction In te previous capters we saw ow to calculate and use te derivatives and integrals of

More information

George Xylomenos and George C. Polyzos. with existing protocols and their eæciency in terms of

George Xylomenos and George C. Polyzos. with existing protocols and their eæciency in terms of IP MULTICASTING FOR WIRELESS MOBILE OSTS George Xylomenos and George C. Polyzos fxgeorge,polyzosg@cs.ucsd.edu Computer Systems Laboratory Department of Computer Science and Engineering University of California,

More information

Tilings of rectangles with T-tetrominoes

Tilings of rectangles with T-tetrominoes Tilings of rectangles wit T-tetrominoes Micael Korn and Igor Pak Department of Matematics Massacusetts Institute of Tecnology Cambridge, MA, 2139 mikekorn@mit.edu, pak@mat.mit.edu August 26, 23 Abstract

More information

Redundancy Awareness in SQL Queries

Redundancy Awareness in SQL Queries Redundancy Awareness in QL Queries Bin ao and Antonio Badia omputer Engineering and omputer cience Department University of Louisville bin.cao,abadia @louisville.edu Abstract In tis paper, we study QL

More information

Intra- and Inter-Session Network Coding in Wireless Networks

Intra- and Inter-Session Network Coding in Wireless Networks Intra- and Inter-Session Network Coding in Wireless Networks Hulya Seferoglu, Member, IEEE, Atina Markopoulou, Member, IEEE, K K Ramakrisnan, Fellow, IEEE arxiv:857v [csni] 3 Feb Abstract In tis paper,

More information

6 Computing Derivatives the Quick and Easy Way

6 Computing Derivatives the Quick and Easy Way Jay Daigle Occiental College Mat 4: Calculus Experience 6 Computing Derivatives te Quick an Easy Way In te previous section we talke about wat te erivative is, an we compute several examples, an ten we

More information

CESILA: Communication Circle External Square Intersection-Based WSN Localization Algorithm

CESILA: Communication Circle External Square Intersection-Based WSN Localization Algorithm Sensors & Transducers 2013 by IFSA ttp://www.sensorsportal.com CESILA: Communication Circle External Square Intersection-Based WSN Localization Algoritm Sun Hongyu, Fang Ziyi, Qu Guannan College of Computer

More information

Proceedings of the 8th WSEAS International Conference on Neural Networks, Vancouver, British Columbia, Canada, June 19-21,

Proceedings of the 8th WSEAS International Conference on Neural Networks, Vancouver, British Columbia, Canada, June 19-21, Proceedings of te 8t WSEAS International Conference on Neural Networks, Vancouver, Britis Columbia, Canada, June 9-2, 2007 3 Neural Network Structures wit Constant Weigts to Implement Dis-Jointly Removed

More information

CSE 332: Data Structures & Parallelism Lecture 8: AVL Trees. Ruth Anderson Winter 2019

CSE 332: Data Structures & Parallelism Lecture 8: AVL Trees. Ruth Anderson Winter 2019 CSE 2: Data Structures & Parallelism Lecture 8: AVL Trees Rut Anderson Winter 29 Today Dictionaries AVL Trees /25/29 2 Te AVL Balance Condition: Left and rigt subtrees of every node ave eigts differing

More information

You Try: A. Dilate the following figure using a scale factor of 2 with center of dilation at the origin.

You Try: A. Dilate the following figure using a scale factor of 2 with center of dilation at the origin. 1 G.SRT.1-Some Tings To Know Dilations affect te size of te pre-image. Te pre-image will enlarge or reduce by te ratio given by te scale factor. A dilation wit a scale factor of 1> x >1enlarges it. A dilation

More information

Announcements. Lilian s office hours rescheduled: Fri 2-4pm HW2 out tomorrow, due Thursday, 7/7. CSE373: Data Structures & Algorithms

Announcements. Lilian s office hours rescheduled: Fri 2-4pm HW2 out tomorrow, due Thursday, 7/7. CSE373: Data Structures & Algorithms Announcements Lilian s office ours resceduled: Fri 2-4pm HW2 out tomorrow, due Tursday, 7/7 CSE373: Data Structures & Algoritms Deletion in BST 2 5 5 2 9 20 7 0 7 30 Wy migt deletion be arder tan insertion?

More information

Implementation of Integral based Digital Curvature Estimators in DGtal

Implementation of Integral based Digital Curvature Estimators in DGtal Implementation of Integral based Digital Curvature Estimators in DGtal David Coeurjolly 1, Jacques-Olivier Lacaud 2, Jérémy Levallois 1,2 1 Université de Lyon, CNRS INSA-Lyon, LIRIS, UMR5205, F-69621,

More information

RECONSTRUCTING OF A GIVEN PIXEL S THREE- DIMENSIONAL COORDINATES GIVEN BY A PERSPECTIVE DIGITAL AERIAL PHOTOS BY APPLYING DIGITAL TERRAIN MODEL

RECONSTRUCTING OF A GIVEN PIXEL S THREE- DIMENSIONAL COORDINATES GIVEN BY A PERSPECTIVE DIGITAL AERIAL PHOTOS BY APPLYING DIGITAL TERRAIN MODEL IV. Évfolyam 3. szám - 2009. szeptember Horvát Zoltán orvat.zoltan@zmne.u REONSTRUTING OF GIVEN PIXEL S THREE- DIMENSIONL OORDINTES GIVEN Y PERSPETIVE DIGITL ERIL PHOTOS Y PPLYING DIGITL TERRIN MODEL bsztrakt/bstract

More information

AVL Trees Outline and Required Reading: AVL Trees ( 11.2) CSE 2011, Winter 2017 Instructor: N. Vlajic

AVL Trees Outline and Required Reading: AVL Trees ( 11.2) CSE 2011, Winter 2017 Instructor: N. Vlajic 1 AVL Trees Outline and Required Reading: AVL Trees ( 11.2) CSE 2011, Winter 2017 Instructor: N. Vlajic AVL Trees 2 Binary Searc Trees better tan linear dictionaries; owever, te worst case performance

More information

An Anchor Chain Scheme for IP Mobility Management

An Anchor Chain Scheme for IP Mobility Management An Ancor Cain Sceme for IP Mobility Management Yigal Bejerano and Israel Cidon Department of Electrical Engineering Tecnion - Israel Institute of Tecnology Haifa 32000, Israel E-mail: bej@tx.tecnion.ac.il.

More information

12.2 Investigate Surface Area

12.2 Investigate Surface Area Investigating g Geometry ACTIVITY Use before Lesson 12.2 12.2 Investigate Surface Area MATERIALS grap paper scissors tape Q U E S T I O N How can you find te surface area of a polyedron? A net is a pattern

More information

Distributed and Optimal Rate Allocation in Application-Layer Multicast

Distributed and Optimal Rate Allocation in Application-Layer Multicast Distributed and Optimal Rate Allocation in Application-Layer Multicast Jinyao Yan, Martin May, Bernard Plattner, Wolfgang Mülbauer Computer Engineering and Networks Laboratory, ETH Zuric, CH-8092, Switzerland

More information

Comparison of the Efficiency of the Various Algorithms in Stratified Sampling when the Initial Solutions are Determined with Geometric Method

Comparison of the Efficiency of the Various Algorithms in Stratified Sampling when the Initial Solutions are Determined with Geometric Method International Journal of Statistics and Applications 0, (): -0 DOI: 0.9/j.statistics.000.0 Comparison of te Efficiency of te Various Algoritms in Stratified Sampling wen te Initial Solutions are Determined

More information

Unsupervised Learning for Hierarchical Clustering Using Statistical Information

Unsupervised Learning for Hierarchical Clustering Using Statistical Information Unsupervised Learning for Hierarcical Clustering Using Statistical Information Masaru Okamoto, Nan Bu, and Tosio Tsuji Department of Artificial Complex System Engineering Hirosima University Kagamiyama

More information

Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art

Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art Multi-Objective Particle Swarm Optimizers: A Survey of te State-of-te-Art Margarita Reyes-Sierra and Carlos A. Coello Coello CINVESTAV-IPN (Evolutionary Computation Group) Electrical Engineering Department,

More information

Overcomplete Steerable Pyramid Filters and Rotation Invariance

Overcomplete Steerable Pyramid Filters and Rotation Invariance vercomplete Steerable Pyramid Filters and Rotation Invariance H. Greenspan, S. Belongie R. Goodman and P. Perona S. Raksit and C. H. Anderson Department of Electrical Engineering Department of Anatomy

More information

Some Handwritten Signature Parameters in Biometric Recognition Process

Some Handwritten Signature Parameters in Biometric Recognition Process Some Handwritten Signature Parameters in Biometric Recognition Process Piotr Porwik Institute of Informatics, Silesian Uniersity, Bdziska 39, 41- Sosnowiec, Poland porwik@us.edu.pl Tomasz Para Institute

More information

Tuning MAX MIN Ant System with off-line and on-line methods

Tuning MAX MIN Ant System with off-line and on-line methods Université Libre de Bruxelles Institut de Recerces Interdisciplinaires et de Développements en Intelligence Artificielle Tuning MAX MIN Ant System wit off-line and on-line metods Paola Pellegrini, Tomas

More information

ICES REPORT Isogeometric Analysis of Boundary Integral Equations

ICES REPORT Isogeometric Analysis of Boundary Integral Equations ICES REPORT 5-2 April 205 Isogeometric Analysis of Boundary Integral Equations by Mattias Taus, Gregory J. Rodin and Tomas J. R. Huges Te Institute for Computational Engineering and Sciences Te University

More information

Hash-Based Indexes. Chapter 11. Comp 521 Files and Databases Fall

Hash-Based Indexes. Chapter 11. Comp 521 Files and Databases Fall Has-Based Indexes Capter 11 Comp 521 Files and Databases Fall 2012 1 Introduction Hasing maps a searc key directly to te pid of te containing page/page-overflow cain Doesn t require intermediate page fetces

More information

MAP MOSAICKING WITH DISSIMILAR PROJECTIONS, SPATIAL RESOLUTIONS, DATA TYPES AND NUMBER OF BANDS 1. INTRODUCTION

MAP MOSAICKING WITH DISSIMILAR PROJECTIONS, SPATIAL RESOLUTIONS, DATA TYPES AND NUMBER OF BANDS 1. INTRODUCTION MP MOSICKING WITH DISSIMILR PROJECTIONS, SPTIL RESOLUTIONS, DT TYPES ND NUMBER OF BNDS Tyler J. lumbaug and Peter Bajcsy National Center for Supercomputing pplications 605 East Springfield venue, Campaign,

More information

HASH ALGORITHMS: A DESIGN FOR PARALLEL CALCULATIONS

HASH ALGORITHMS: A DESIGN FOR PARALLEL CALCULATIONS HASH ALGORITHMS: A DESIGN FOR PARALLEL CALCULATIONS N.G.Bardis Researc Associate Hellenic Ministry of te Interior, Public Administration and Decentralization 8, Dragatsaniou str., Klatmonos S. 0559, Greece

More information

Alternating Direction Implicit Methods for FDTD Using the Dey-Mittra Embedded Boundary Method

Alternating Direction Implicit Methods for FDTD Using the Dey-Mittra Embedded Boundary Method Te Open Plasma Pysics Journal, 2010, 3, 29-35 29 Open Access Alternating Direction Implicit Metods for FDTD Using te Dey-Mittra Embedded Boundary Metod T.M. Austin *, J.R. Cary, D.N. Smite C. Nieter Tec-X

More information

Single and Multi-View Reconstruction of Structured Scenes

Single and Multi-View Reconstruction of Structured Scenes ACCV2002: Te 5t Asian Conference on Computer Vision 23 25 January 2002 Melbourne Australia 1 Single and Multi-View econstruction of Structured Scenes Etienne Grossmann Diego Ortin and José Santos-Victor

More information

Traffic Sign Classification Using Ring Partitioned Method

Traffic Sign Classification Using Ring Partitioned Method Traffic Sign Classification Using Ring Partitioned Metod Aryuanto Soetedjo and Koici Yamada Laboratory for Management and Information Systems Science, Nagaoa University of Tecnology 603- Kamitomioamaci,

More information

A geometric analysis of heuristic search

A geometric analysis of heuristic search A geometric analysis of euristic searc by GORDON J. VANDERBRUG University of Maryland College Park, Maryland ABSTRACT Searc spaces for various types of problem representations can be represented in one

More information

Limits and Continuity

Limits and Continuity CHAPTER Limits and Continuit. Rates of Cange and Limits. Limits Involving Infinit.3 Continuit.4 Rates of Cange and Tangent Lines An Economic Injur Level (EIL) is a measurement of te fewest number of insect

More information

A Statistical Approach for Target Counting in Sensor-Based Surveillance Systems

A Statistical Approach for Target Counting in Sensor-Based Surveillance Systems Proceedings IEEE INFOCOM A Statistical Approac for Target Counting in Sensor-Based Surveillance Systems Dengyuan Wu, Decang Cen,aiXing, Xiuzen Ceng Department of Computer Science, Te George Wasington University,

More information

A library of biorthogonal wavelet transforms originated from polynomial splines

A library of biorthogonal wavelet transforms originated from polynomial splines A library of biortogonal wavelet transforms originated from polynomial splines Amir Z. Averbuc a and Valery A. Zeludev a a Scool of Computer Science, Tel Aviv University Tel Aviv 69978, Israel ABSTRACT

More information

2D transformations Homogeneous coordinates. Uses of Transformations

2D transformations Homogeneous coordinates. Uses of Transformations 2D transformations omogeneous coordinates Uses of Transformations Modeling: position and resize parts of a complex model; Viewing: define and position te virtual camera Animation: define ow objects move/cange

More information

You should be able to visually approximate the slope of a graph. The slope m of the graph of f at the point x, f x is given by

You should be able to visually approximate the slope of a graph. The slope m of the graph of f at the point x, f x is given by Section. Te Tangent Line Problem 89 87. r 5 sin, e, 88. r sin sin Parabola 9 9 Hperbola e 9 9 9 89. 7,,,, 5 7 8 5 ortogonal 9. 5, 5,, 5, 5. Not multiples of eac oter; neiter parallel nor ortogonal 9.,,,

More information

Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating Soft sensor modelling by time difference, recursive partial least squares adaptive model updating Y Fu 1, 2, W Yang 2, O Xu 1, L Zou 3, J Wang 4 1 Zijiang College, Zejiang University of ecnology, Hangzou

More information

UNSUPERVISED HIERARCHICAL IMAGE SEGMENTATION BASED ON THE TS-MRF MODEL AND FAST MEAN-SHIFT CLUSTERING

UNSUPERVISED HIERARCHICAL IMAGE SEGMENTATION BASED ON THE TS-MRF MODEL AND FAST MEAN-SHIFT CLUSTERING UNSUPERVISED HIERARCHICAL IMAGE SEGMENTATION BASED ON THE TS-MRF MODEL AND FAST MEAN-SHIFT CLUSTERING Raffaele Gaetano, Giuseppe Scarpa, Giovanni Poggi, and Josiane Zerubia Dip. Ing. Elettronica e Telecomunicazioni,

More information

PLK-B SERIES Technical Manual (USA Version) CLICK HERE FOR CONTENTS

PLK-B SERIES Technical Manual (USA Version) CLICK HERE FOR CONTENTS PLK-B SERIES Technical Manual (USA Version) CLICK ERE FOR CONTENTS CONTROL BOX PANEL MOST COMMONLY USED FUNCTIONS INITIAL READING OF SYSTEM SOFTWARE/PAGES 1-2 RE-INSTALLATION OF TE SYSTEM SOFTWARE/PAGES

More information

The navigability variable is binary either a cell is navigable or not. Thus, we can invert the entire reasoning by substituting x i for x i : (4)

The navigability variable is binary either a cell is navigable or not. Thus, we can invert the entire reasoning by substituting x i for x i : (4) A Multi-Resolution Pyramid for Outdoor Robot Terrain Perception Micael Montemerlo and Sebastian Trun AI Lab, Stanford University 353 Serra Mall Stanford, CA 94305-9010 {mmde,trun}@stanford.edu Abstract

More information

Feature-Based Steganalysis for JPEG Images and its Implications for Future Design of Steganographic Schemes

Feature-Based Steganalysis for JPEG Images and its Implications for Future Design of Steganographic Schemes Feature-Based Steganalysis for JPEG Images and its Implications for Future Design of Steganograpic Scemes Jessica Fridric Dept. of Electrical Engineering, SUNY Bingamton, Bingamton, NY 3902-6000, USA fridric@bingamton.edu

More information

arxiv: v3 [cs.cv] 31 Jul 2017

arxiv: v3 [cs.cv] 31 Jul 2017 GUIDEFILL: GPU ACCELERATED, ARTIST GUIDED GEOMETRIC INPAINTING FOR 3D CONVERSION OF FILM L. ROBERT HOCKING, RUSSELL MACKENZIE, AND CAROLA-BIBIANE SCHÖNLIEB arxiv:1611.05319v3 [cs.cv] 31 Jul 2017 Abstract.

More information

Classify solids. Find volumes of prisms and cylinders.

Classify solids. Find volumes of prisms and cylinders. 11.4 Volumes of Prisms and Cylinders Essential Question How can you find te volume of a prism or cylinder tat is not a rigt prism or rigt cylinder? Recall tat te volume V of a rigt prism or a rigt cylinder

More information

Parallel Simulation of Equation-Based Models on CUDA-Enabled GPUs

Parallel Simulation of Equation-Based Models on CUDA-Enabled GPUs Parallel Simulation of Equation-Based Models on CUDA-Enabled GPUs Per Ostlund Department of Computer and Information Science Linkoping University SE-58183 Linkoping, Sweden per.ostlund@liu.se Kristian

More information

CS 234. Module 6. October 16, CS 234 Module 6 ADT Dictionary 1 / 33

CS 234. Module 6. October 16, CS 234 Module 6 ADT Dictionary 1 / 33 CS 234 Module 6 October 16, 2018 CS 234 Module 6 ADT Dictionary 1 / 33 Idea for an ADT Te ADT Dictionary stores pairs (key, element), were keys are distinct and elements can be any data. Notes: Tis is

More information

Local features and image matching May 8 th, 2018

Local features and image matching May 8 th, 2018 Local features and image matcing May 8 t, 2018 Yong Jae Lee UC Davis Last time RANSAC for robust fitting Lines, translation Image mosaics Fitting a 2D transformation Homograpy 2 Today Mosaics recap: How

More information

Shape Analysis with Inductive Recursion Synthesis

Shape Analysis with Inductive Recursion Synthesis Sape Analysis wit Inductive Recursion Syntesis Bolei Guo Neil Vacarajani David I. August Department of Computer Science Princeton University {bguo,nvacar,august}@princeton.edu Abstract Separation logic

More information

1 Finding Trigonometric Derivatives

1 Finding Trigonometric Derivatives MTH 121 Fall 2008 Essex County College Division of Matematics Hanout Version 8 1 October 2, 2008 1 Fining Trigonometric Derivatives 1.1 Te Derivative as a Function Te efinition of te erivative as a function

More information