Recursive Estimation
Recursive Estimation
Raffaello D'Andrea, Spring 2018

Problem Set 1: Probability Review

Last updated: March 6, 2018

Notes:

Notation: Unless otherwise noted, x, y, and z denote random variables, p_x denotes the probability density function of x, and p_{x|y} denotes the conditional probability density function of x conditioned on y. Note that both shorthand (as introduced in Lectures 1 and 2) and longhand notation are used. The expected value of x and its variance are denoted by E[x] and Var[x], and Pr(Z) denotes the probability that the event Z occurs. A normally distributed random variable x with mean µ and variance σ² is denoted by x ∼ N(µ, σ²).

Please report any errors found in this problem set to the teaching assistants (hofermat@ethz.ch or csferrazza@ethz.ch).
Problem Set

Problem 1
Let x be a discrete random variable (DRV) with sample space X = {−1, 0, 1, 2} and probability density function (PDF) p(x) = a x².
a) Determine a such that p(x) is a valid PDF.
b) Calculate the probability Pr(x = 1).
c) Calculate the probability Pr(−1 ≤ x ≤ 1).
d) Calculate the probability Pr(x > 1).
e) Calculate the expected value E[x].
f) Calculate the variance Var[x].

Problem 2
Let x be a continuous random variable (CRV) with sample space X = [−1, 2] and probability density function (PDF) p(x) = a x².
a) Determine a such that p(x) is a valid PDF.
b) Calculate the probability Pr(x = 1).
c) Calculate the probability Pr(−1 ≤ x ≤ 1).
d) Calculate the probability Pr(x > 1).
e) Calculate the expected value E[x].
f) Calculate the variance Var[x].

Problem 3
Let x and y be continuous random variables (CRVs) with sample spaces X = [−1, 2], Y = [0, 1] and joint PDF p(x, y) = (2/11)(x + y)².
a) Determine p_x.
b) Determine p_y.
c) Determine p_{x|y}.
d) Determine p_{y|x}.
e) Compute Pr(x > 1 | y = 1/2), the probability that x is greater than 1 given that y is 0.5.

Problem 4
Prove that p(x|y) = p(x) ⟺ p(y|x) = p(y).
Problem 5
Prove that p(x|y, z) = p(x|z) ⟺ p(x, y|z) = p(x|z) p(y|z).

Problem 6
Come up with an example where p(x, y|z) = p(x|z) p(y|z) but p(x, y) ≠ p(x) p(y) for continuous random variables x, y, and z.

Problem 7
Come up with an example where p(x, y|z) = p(x|z) p(y|z) but p(x, y) ≠ p(x) p(y) for discrete random variables x, y, and z.

Problem 8
Let y be a scalar continuous random variable with probability density function p_y, and let g be a function transforming y to a new variable x by x = g(y). Derive the change of variables formula, i.e. the equation for the probability density function p_x, under the assumption that g is continuously differentiable with dg(y)/dy < 0 for all y.

Problem 9
Let x ∈ X be a scalar-valued continuous random variable with probability density function p_x. Prove the law of the unconscious statistician, that is, for a function g show that the expected value of y = g(x) is given by

  E[y] = ∫_X g(x̄) p_x(x̄) dx̄.

You may consider the special case where g is a continuously differentiable, strictly monotonically increasing function.

Problem 10
Prove that E_x[ax + b] = a E[x] + b, where a and b are constants.

Problem 11
Let x and y be scalar random variables. Prove that, if x and y are independent, then for any functions g and h,

  E_{x,y}[g(x) h(y)] = E_x[g(x)] E_y[h(y)].

Problem 12
From above, it follows that if x and y are independent, then E[xy] = E[x] E[y]. Is the converse true, that is, if E[xy] = E[x] E[y], does it imply that x and y are independent? If yes, prove it. If no, find a counterexample.
Problem 13
Let x ∈ X and y ∈ Y be independent, scalar-valued discrete random variables. Let z = x + y. Prove that

  p_z(z̄) = Σ_{ȳ∈Y} p_x(z̄ − ȳ) p_y(ȳ) = Σ_{x̄∈X} p_x(x̄) p_y(z̄ − x̄),

that is, the probability density function of z is the convolution of the individual probability density functions.

Problem 14 (adapted from S.M. Ross, Introduction to Probability Models, 2010)
Let x₁, x₂, …, xₙ be independent random variables, each having a uniform distribution over [0, 1]. Let the random variable m be defined as the maximum of x₁, …, xₙ, that is, m = max{x₁, x₂, …, xₙ}. Show that the cumulative distribution function of m, F_m(·), is given by F_m(m̄) = m̄ⁿ for 0 ≤ m̄ ≤ 1. What is the probability density function of m?

Problem 15 (adapted from S.M. Ross, Introduction to Probability Models, 2010)
Prove that E[x²] ≥ (E[x])². When does the statement hold with equality?

Problem 16 (adapted from S.M. Ross, Introduction to Probability Models, 1985)
Suppose that x and y are independent scalar continuous random variables. Show that

  Pr(x ≤ y) = ∫ F_x(ȳ) p_y(ȳ) dȳ.

Problem 17 (adapted from S.M. Ross, Introduction to Probability Models, 2010)
Let x and y be independent random variables with means µ_x and µ_y and variances σ²_x and σ²_y, respectively. Show that

  Var_{x,y}[xy] = σ²_x σ²_y + µ²_y σ²_x + µ²_x σ²_y.

Problem 18
a) Use the method presented in class to obtain a sample x of the exponential distribution

  p̂_x(x̄) = λ e^(−λx̄) for 0 ≤ x̄, and 0 for x̄ < 0 (λ > 0),

from a given sample u of a uniform distribution on [0, 1].
b) Let x and y be CRVs with (x, y) ∈ {(x̄, ȳ) ∈ R² | 0 ≤ x̄ < ∞, x̄ ≤ ȳ < ∞} and joint PDF

  p̂_xy(x̄, ȳ) = λ₁ λ₂ e^(−λ₁ x̄) e^(−λ₂ (ȳ − x̄)) for 0 ≤ x̄ < ∞, x̄ ≤ ȳ < ∞, and 0 otherwise,

with 0 < λ₂ < λ₁. From two given samples u₁ and u₂ of a uniform distribution on [0, 1], compute a sample (x_s, y_s) of x and y using the method presented in class.
Additional Problems

The following problems are not directly related to the topics covered in the lectures. We will not assume that you know these problems' specific results by heart, for example Chebyshev's inequality. However, these problems let you further practice the application of the fundamental probability concepts discussed in the first two lectures.

Problem 19
Let x and y be binary random variables: x, y ∈ {0, 1}. Prove that

  p_xy(x̄, ȳ) ≥ p_x(x̄) + p_y(ȳ) − 1,

which is called Bonferroni's inequality.

Problem 20
Let x be a scalar continuous random variable that takes on only non-negative values. Prove that, for a > 0,

  Pr(x ≥ a) ≤ E[x]/a,

which is called Markov's inequality.

Problem 21
Let x be a scalar continuous random variable. Let x̄ = E[x], σ² = Var[x]. Prove that for any k > 0,

  Pr(|x − x̄| ≥ k) ≤ σ²/k²,

which is called Chebyshev's inequality.

Problem 22 (adapted from S.M. Ross, Introduction to Probability Models, 2010)
Use Chebyshev's inequality to prove the weak law of large numbers. Namely, if x₁, x₂, … are independent and identically distributed with mean µ and variance σ², then, for any ε > 0,

  Pr(|(x₁ + x₂ + ⋯ + xₙ)/n − µ| > ε) → 0 as n → ∞.

Problem 23 (adapted from S.M. Ross, Introduction to Probability Models, 2010)
Suppose that x is a random variable with mean 10 and variance 15. What can we say about Pr(5 < x < 15)?

Problem 24
Exponential random variables, such as the one in Problem 18, are used to model failures of components that do not wear, for example solid-state electronics. This is based on their property

  Pr(x > s + t | x > t) = Pr(x > s) for all s, t ≥ 0.

A random variable with this property is said to be memoryless. Prove that a random variable with an exponential distribution satisfies this property.
Sample solutions

Problem 1
a) We require Σ_{x∈X} p(x) = 1. We find Σ_{x=−1}^{2} a x² = 6a and therefore a = 1/6.
b) Pr(x = 1) = p(1) = 1/6.
c) Pr(−1 ≤ x ≤ 1) = Σ_{x=−1}^{1} p(x) = 1/3.
d) From above: Pr(x > 1) = 1 − 1/3 = 2/3, which is equal to Pr(x = 2).
e) E[x] = Σ_{x∈X} x p(x) = (−1)(1/6) + 0 + (1)(1/6) + (2)(4/6) = 4/3.
f) Var[x] = E[(x − E[x])²] = E[x² − 2x E[x] + E[x]²] = E[x²] − 2 E[x E[x]] + E[E[x]²] = E[x²] − E[x]²
  = Σ_{x∈X} x² p(x) − 16/9 = 3 − 16/9 = 11/9.

Problem 2
a) We require ∫_X p(x) dx = 1. We find ∫_{−1}^{2} a x² dx = a [x³/3]_{−1}^{2} = 3a and therefore a = 1/3.
b) Pr(x = 1) = ∫_{1}^{1} p(x) dx = 0, since the integration interval has zero length.
c) Pr(−1 ≤ x ≤ 1) = ∫_{−1}^{1} p(x) dx = (1/3) [x³/3]_{−1}^{1} = 2/9.
d) From above: Pr(x > 1) = 1 − 2/9 = 7/9.
e) E[x] = ∫_X x p(x) dx = (1/3) ∫_{−1}^{2} x³ dx = (1/3) [x⁴/4]_{−1}^{2} = 5/4.
f) From Problem 1f: Var[x] = E[x²] − E[x]² = ∫_{−1}^{2} x² p(x) dx − 25/16 = (1/3) ∫_{−1}^{2} x⁴ dx − 25/16 = (1/3) [x⁵/5]_{−1}^{2} − 25/16 = 11/5 − 25/16 = 51/80.

Problem 3
a) We apply marginalization: p(x) = ∫_Y p(x, y) dy = (2/11)(1/3 + x + x²).
b) We apply marginalization: p(y) = ∫_X p(x, y) dx = (6/11)(1 + y + y²).
c) We apply conditioning: p(x|y) = p(x, y)/p(y) = (x + y)² / (3(1 + y + y²)).
d) We apply conditioning: p(y|x) = p(x, y)/p(x) = 3(x + y)² / (1 + 3x + 3x²).
e) Pr(x > 1 | y = 0.5) = ∫_{1}^{2} p(x | y = 0.5) dx = ∫_{1}^{2} (4/21)(x + 1/2)² dx = (4/63) [(x + 1/2)³]_{1}^{2} = 7/9.
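The discrete results of Problem 1 can be verified numerically. The following is a quick verification sketch (not part of the original solutions) using exact rational arithmetic from the Python standard library:

```python
from fractions import Fraction

# Exact check of the Problem 1 results: p(x) = x^2/6 on X = {-1, 0, 1, 2}.
X = [-1, 0, 1, 2]
a = Fraction(1, 6)
p = {x: a * x * x for x in X}

assert sum(p.values()) == 1                    # (a) valid PDF
assert p[1] == Fraction(1, 6)                  # (b) Pr(x = 1)
assert p[-1] + p[0] + p[1] == Fraction(1, 3)   # (c) Pr(-1 <= x <= 1)
assert 1 - Fraction(1, 3) == p[2]              # (d) Pr(x > 1) = Pr(x = 2)
E = sum(x * p[x] for x in X)
Var = sum(x * x * p[x] for x in X) - E * E
assert (E, Var) == (Fraction(4, 3), Fraction(11, 9))   # (e), (f)
```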
Problem 4
From the definition of conditional probability, we have p(x, y) = p(x|y) p(y) and p(x, y) = p(y|x) p(x), and therefore

  p(x|y) p(y) = p(y|x) p(x).    (1)

We can assume that p(x) ≠ 0 and p(y) ≠ 0; otherwise p(y|x) and p(x|y), respectively, are not defined, and the statement under consideration does not make sense. We check both directions of the equivalence statement:

  p(x|y) = p(x)  ⟹ (by (1))  p(x) p(y) = p(y|x) p(x)  ⟹  p(y) = p(y|x),
  p(y|x) = p(y)  ⟹ (by (1))  p(x|y) p(y) = p(y) p(x)  ⟹  p(x) = p(x|y).

Therefore, p(x|y) = p(x) ⟺ p(y|x) = p(y).

Problem 5
Using

  p(x, y|z) = p(x|y, z) p(y|z),    (2)

both directions of the equivalence statement can be shown as follows:

  p(x|y, z) = p(x|z)  ⟹ (by (2))  p(x, y|z) = p(x|z) p(y|z),
  p(x, y|z) = p(x|z) p(y|z)  ⟹ (by (2))  p(x|y, z) p(y|z) = p(x|z) p(y|z)  ⟹  p(x|y, z) = p(x|z),

since p(y|z) ≠ 0; otherwise p(x|y, z) is not defined.

Problem 6
Let x, y, z ∈ [0, 1] be continuous random variables. Consider the joint PDF

  p(x, y, z) = c g_x(x, z) g_y(y, z),

where g_x and g_y are functions (not necessarily PDFs) and c is a constant such that

  ∫∫∫ p(x, y, z) dx dy dz = 1.

Then

  p(x, y|z) = p(x, y, z)/p(z) = (c g_x(x, z) g_y(y, z)) / (c G_x(z) G_y(z)),

with

  G_x(z) = ∫ g_x(x, z) dx,  G_y(z) = ∫ g_y(y, z) dy,
since

  p(z) = ∫∫ c g_x(x, z) g_y(y, z) dx dy = c G_x(z) ∫ g_y(y, z) dy = c G_x(z) G_y(z).

Furthermore,

  p(x|z) = p(x, z)/p(z) = (∫ p(x, y, z) dy) / p(z) = (c g_x(x, z) G_y(z)) / (c G_x(z) G_y(z)) = g_x(x, z)/G_x(z)

and, by analogy, p(y|z) = g_y(y, z)/G_y(z). Continuing the above,

  p(x, y|z) = (g_x(x, z) g_y(y, z)) / (G_x(z) G_y(z)) = p(x|z) p(y|z),

that is, x and y are conditionally independent.

Now, let g_x(x, z) = x + z and g_y(y, z) = y + z. Then

  p(x, y) = ∫_{0}^{1} p(x, y, z) dz = c ∫_{0}^{1} (x + z)(y + z) dz = c ∫_{0}^{1} (z² + (x + y)z + xy) dz
  = c [z³/3 + (x + y)z²/2 + xyz]_{z=0}^{z=1} = c (1/3 + (x + y)/2 + xy),

and

  p(x) = ∫_{0}^{1} p(x, y) dy = c ∫_{0}^{1} (1/3 + (x + y)/2 + xy) dy = c (x + 7/12),
  p(y) = c (y + 7/12)  (by symmetry).

Therefore,

  p(x) p(y) = c² (49/144 + (7/12)(x + y) + xy) ≠ p(x, y),

that is, x and y are not independent.
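The counterexample can be spot-checked exactly at one point. The text leaves the normalization constant implicit; for g_x = x + z, g_y = y + z on [0, 1]³ it works out to c = 12/13 (computed here from ∫₀¹ (z + 1/2)² dz = 13/12, not stated in the original). A verification sketch:

```python
from fractions import Fraction

# Exact spot check of the Problem 6 counterexample; c = 12/13 is derived
# above and is an addition to the text, not a quoted value.
c = Fraction(12, 13)

def p_xy(x, y):            # c (1/3 + (x + y)/2 + x y)
    return c * (Fraction(1, 3) + Fraction(x + y, 2) + x * y)

def p_x(x):                # c (x + 7/12); p_y is identical by symmetry
    return c * (x + Fraction(7, 12))

# At (x, y) = (0, 0): p(x, y) = 4/13 but p(x) p(y) = 49/169, so x and y
# are conditionally independent given z yet not independent.
assert p_xy(0, 0) == Fraction(4, 13)
assert p_x(0) * p_x(0) == Fraction(49, 169)
assert p_xy(0, 0) != p_x(0) * p_x(0)
```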
Problem 7
As in Problem 6, let p(x, y, z) = c g_x(x, z) g_y(y, z). This ensures that p(x, y|z) = p(x|z) p(y|z). Let x, y, z ∈ {0, 1}.

Define g_x and g_y by value tables (g_x(x, z) = 1 if x = z and 0 otherwise, and likewise for g_y):

  x z | g_x(x, z)      y z | g_y(y, z)
  0 0 | 1              0 0 | 1
  0 1 | 0              0 1 | 0
  1 0 | 0              1 0 | 0
  1 1 | 1              1 1 | 1

Compute p(x, y), p(x), p(y). Normalization gives c = 1/2, so

  p(x, y, z) = (1/2) g_x(x, z) g_y(y, z),

which is 1/2 for (x, y, z) = (0, 0, 0) and (1, 1, 1), and 0 otherwise. Marginalizing,

  x y | p(x, y)        x | p(x)      y | p(y)
  0 0 | 1/2            0 | 1/2       0 | 1/2
  0 1 | 0              1 | 1/2       1 | 1/2
  1 0 | 0
  1 1 | 1/2

so p(x) p(y) = 1/4 for all (x, y). From this we see p(x, y) ≠ p(x) p(y).

Remark: For verification, one may also compute p(x, y|z), p(x|z), p(y|z) in order to verify p(x, y|z) = p(x|z) p(y|z), although this holds by construction of p(x, y, z) and the derivation in Problem 6.

Problem 8
Consider the probability that y is in a small interval [ȳ, ȳ + Δy] (with Δy > 0 small),

  Pr(y ∈ [ȳ, ȳ + Δy]) = ∫_{ȳ}^{ȳ+Δy} p_y(ỹ) dỹ,

which approaches p_y(ȳ) Δy in the limit as Δy → 0. Let x̄ := g(ȳ). For small Δy, we have by a Taylor series approximation,

  g(ȳ + Δy) ≈ g(ȳ) + (dg/dy)(ȳ) Δy = x̄ + Δx,  Δx := (dg/dy)(ȳ) Δy.

Since g(y) is decreasing, Δx < 0, see Figure 1.

[Figure 1: Decreasing function g(y) on a small interval [ȳ, ȳ + Δy].]
We are interested in the probability of x ∈ [x̄ + Δx, x̄]. For small Δx, this probability is equal to the probability that y ∈ [ȳ, ȳ + Δy] (since y ∈ [ȳ, ȳ + Δy] ⟺ g(y) ∈ [g(ȳ + Δy), g(ȳ)] ⟺ x ∈ [x̄ + Δx, x̄] for small Δy and Δx). Therefore, as Δy → 0 (and thus Δx → 0),

  Pr(y ∈ [ȳ, ȳ + Δy]) = p_y(ȳ) Δy = Pr(x ∈ [x̄ + Δx, x̄]) = p_x(x̄)(−Δx)
  ⟹ p_x(x̄) = −p_y(ȳ) Δy/Δx.

As Δy → 0 (and thus Δx → 0), Δy/Δx → 1/((dg/dy)(ȳ)), and therefore

  p_x(x̄) = −p_y(ȳ) / ((dg/dy)(ȳ)),  or  p_x(x) = p_y(y) / |(dg/dy)(y)|,

since dg/dy < 0.

Problem 9
From the change of variables formula, we have

  p(y) = p(x) / ((dg/dx)(x)),

since g is continuously differentiable and strictly monotonically increasing. Furthermore, y = g(x) implies dy = (dg/dx)(x) dx. Let Y = g(X), that is, Y = {y | there exists x ∈ X with y = g(x)}. Then

  E[y] = ∫_Y ȳ p_y(ȳ) dȳ = ∫_X g(x̄) (p_x(x̄) / ((dg/dx)(x̄))) (dg/dx)(x̄) dx̄ = ∫_X g(x̄) p_x(x̄) dx̄.

Problem 10
Using the law of the unconscious statistician, we have for a discrete random variable x,

  E[ax + b] = Σ_{x∈X} (ax + b) p(x) = a Σ_{x∈X} x p(x) + b Σ_{x∈X} p(x) = a E[x] + b,

since Σ x p(x) = E[x] and Σ p(x) = 1, and for a continuous random variable x,

  E[ax + b] = ∫ (ax + b) p(x) dx = a ∫ x p(x) dx + b ∫ p(x) dx = a E[x] + b.
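The change-of-variables formula derived in Problem 8 can be illustrated with a Monte Carlo sketch on an assumed example: y uniform on [0, 1) and the decreasing map x = g(y) = −2y, for which the formula predicts p_x(x) = p_y(y)/|dg/dy| = 1/2 on (−2, 0].

```python
import random

# Monte Carlo illustration of p_x = p_y / |dg/dy| for g(y) = -2y
# (an illustrative example, not from the text).
random.seed(0)
n = 200_000
xs = [-2.0 * random.random() for _ in range(n)]

# Empirical probability of x in [-1.0, -0.5]; the density 1/2 predicts 0.25.
frac = sum(-1.0 <= x <= -0.5 for x in xs) / n
print(frac)   # close to 0.25
```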
Problem 11
Using the law of the unconscious statistician applied to the vector random variable (x, y) with PDF p(x, y), we have

  E_{x,y}[g(x) h(y)] = ∫∫ g(x) h(y) p(x, y) dx dy
  = ∫∫ g(x) h(y) p(x) p(y) dx dy  (by independence of x and y)
  = (∫ g(x) p(x) dx) (∫ h(y) p(y) dy)
  = E_x[g(x)] E_y[h(y)],

and similarly for discrete random variables.

Problem 12
We will construct a counterexample. Let x be a discrete random variable that takes the values −1, 0, and 1 with probabilities 1/4, 1/2, and 1/4, respectively. Let w be a discrete random variable taking the values −1, 1 with equal probability. Let x and w be independent. Let y be a discrete random variable defined by y = xw (it follows that y can take the values −1, 0, 1). By construction, it is clear that y depends on x. We will now show that y and x are not independent, but E[xy] = E[x] E[y].

Joint PDF p(x, y):

  x = −1 (prob. 1/4), w = +1 (prob. 1/2):  (x, y) = (−1, −1), probability 1/8
  x = −1 (prob. 1/4), w = −1 (prob. 1/2):  (x, y) = (−1, +1), probability 1/8
  x = +1 (prob. 1/4), w = +1 (prob. 1/2):  (x, y) = (+1, +1), probability 1/8
  x = +1 (prob. 1/4), w = −1 (prob. 1/2):  (x, y) = (+1, −1), probability 1/8
  x = 0 (prob. 1/2), w either ±1:  (x, y) = (0, 0), probability 1/2
  All other pairs, e.g. (−1, 0), are impossible since w ≠ 0.

Marginalization gives p(x), p(y):

  p_x(−1) = 1/4, p_x(0) = 1/2, p_x(1) = 1/4;  p_y(−1) = 1/4, p_y(0) = 1/2, p_y(1) = 1/4.
It follows that p(x, y) = p(x) p(y) for all x, y does not hold. For example,

  p_xy(−1, −1) = 1/8 ≠ 1/16 = p_x(−1) p_y(−1).

Expected values:

  E[xy] = Σ_x Σ_y x y p(x, y) = (−1)(−1)(1/8) + (−1)(+1)(1/8) + (+1)(−1)(1/8) + (+1)(+1)(1/8) = 0,
  E[x] = Σ_x x p(x) = 0,  E[y] = Σ_y y p(y) = 0.

So E[xy] = E[x] E[y], but p(x, y) = p(x) p(y) does not hold for all x, y. Hence, we have shown by counterexample that E[xy] = E[x] E[y] does not imply independence of x and y.

Problem 13
From the definition of the probability density function for discrete random variables, p_z(z̄) = Pr(z = z̄). The probability that z takes on z̄ is given by the sum of the probabilities of all possibilities of x and y such that z̄ = x + y, that is, all ȳ ∈ Y such that x = z̄ − ȳ and y = ȳ (or, alternatively, all x̄ ∈ X such that x = x̄ and y = z̄ − x̄). That is,

  p_z(z̄) = Pr(z = z̄) = Σ_{ȳ∈Y} Pr(x = z̄ − ȳ) Pr(y = ȳ)  (by the independence assumption)
  = Σ_{ȳ∈Y} p_x(z̄ − ȳ) p_y(ȳ).

Similarly, p_z(z̄) = Σ_{x̄∈X} p_x(x̄) p_y(z̄ − x̄) may be obtained.

Problem 14
For 0 ≤ t ≤ 1, we write

  F_m(t) = Pr(m ∈ [0, t]) = Pr(max{x₁, …, xₙ} ∈ [0, t]) = Pr(x_i ∈ [0, t] for all i ∈ {1, …, n})
  = Pr(x₁ ∈ [0, t]) Pr(x₂ ∈ [0, t]) ⋯ Pr(xₙ ∈ [0, t]) = tⁿ.

By substituting m̄ for t, we obtain the desired result.
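The Problem 12 counterexample above can also be enumerated exactly in code, as a verification sketch (not part of the original solutions):

```python
from fractions import Fraction
from itertools import product

# Exact enumeration: x and y = x*w satisfy E[xy] = E[x]E[y] = 0,
# yet are not independent.
px = {-1: Fraction(1, 4), 0: Fraction(1, 2), 1: Fraction(1, 4)}
pw = {-1: Fraction(1, 2), 1: Fraction(1, 2)}

pxy = {}
for x, w in product(px, pw):
    key = (x, x * w)
    pxy[key] = pxy.get(key, Fraction(0)) + px[x] * pw[w]

E_xy = sum(x * y * p for (x, y), p in pxy.items())
E_x = sum(x * p for x, p in px.items())
py = {y: sum(p for (_, y2), p in pxy.items() if y2 == y) for y in (-1, 0, 1)}
E_y = sum(y * p for y, p in py.items())

assert E_xy == E_x * E_y == 0                   # uncorrelated
assert pxy[(-1, -1)] == Fraction(1, 8)          # joint probability ...
assert px[-1] * py[-1] == Fraction(1, 16)       # ... differs from the product
```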
Since F_m(m̄) = ∫_{0}^{m̄} p_m(τ) dτ, the PDF of m is obtained by differentiating,

  p_m(m̄) = (dF_m/dm̄)(m̄) = d/dm̄ (m̄ⁿ) = n m̄^(n−1).

Problem 15
Let x̄ = E[x]. Then

  Var[x] = ∫ (x − x̄)² p(x) dx = ∫ x² p(x) dx − 2x̄ ∫ x p(x) dx + x̄² ∫ p(x) dx
  = E[x²] − 2x̄² + x̄² = E[x²] − x̄²,

where ∫ x p(x) dx = x̄ and ∫ p(x) dx = 1. Since Var[x] ≥ 0, we have E[x²] = x̄² + Var[x] ≥ x̄², with equality if Var[x] = 0.

Problem 16
For fixed ȳ, Pr(x ≤ ȳ) = F_x(ȳ). The probability that y ∈ [ȳ, ȳ + Δy] is

  Pr(y ∈ [ȳ, ȳ + Δy]) = ∫_{ȳ}^{ȳ+Δy} p_y(ỹ) dỹ ≈ p_y(ȳ) Δy

for small Δy. Therefore, the probability that x ≤ ȳ and y ∈ [ȳ, ȳ + Δy] is

  Pr(x ≤ ȳ and y ∈ [ȳ, ȳ + Δy]) = Pr(x ≤ ȳ) Pr(y ∈ [ȳ, ȳ + Δy]) ≈ F_x(ȳ) p_y(ȳ) Δy    (3)

by independence of x and y. Now, since we are interested in Pr(x ≤ y), i.e. in the probability of x being less than or equal to y and not just a fixed ȳ, we can sum (3) over all possible intervals [ȳ_i, ȳ_i + Δy]:

  Pr(there exists ȳ_i such that x ≤ ȳ_i ∈ [ȳ_i, ȳ_i + Δy]) = lim_{N→∞} Σ_{i=−N}^{N} F_x(ȳ_i) p_y(ȳ_i) Δy.

By letting Δy → 0, we obtain

  Pr(x ≤ y) = ∫ F_x(ȳ) p_y(ȳ) dȳ.
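The two results above, F_m(t) = tⁿ for the maximum of n uniforms and Pr(x ≤ y) = ∫ F_x(ȳ) p_y(ȳ) dȳ (which evaluates to 1/2 for two independent uniforms on [0, 1]), lend themselves to quick Monte Carlo spot checks. A sketch with arbitrary sample sizes:

```python
import random

# Monte Carlo spot checks of the CDF of the maximum (Problem 14) and the
# Pr(x <= y) formula (Problem 16) for uniforms on [0, 1].
random.seed(1)
trials = 100_000

n, t = 5, 0.8
cdf_est = sum(max(random.random() for _ in range(n)) <= t
              for _ in range(trials)) / trials
print(cdf_est, t ** n)   # both near 0.328

le_est = sum(random.random() <= random.random() for _ in range(trials)) / trials
print(le_est)            # near 0.5
```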
Problem 17
First, we compute the mean of xy,

  E_{x,y}[xy] = E[x] E[y] = µ_x µ_y.

Hence, by using independence of x and y and the result of Problem 9,

  Var_{x,y}[xy] = E_{x,y}[(xy)²] − (µ_x µ_y)² = E[x²] E[y²] − µ²_x µ²_y.

Using E[x²] = σ²_x + µ²_x and E[y²] = σ²_y + µ²_y, we obtain the result

  Var_{x,y}[xy] = (σ²_x + µ²_x)(σ²_y + µ²_y) − µ²_x µ²_y = σ²_x σ²_y + µ²_y σ²_x + µ²_x σ²_y.

Problem 18
a) The cumulative distribution function is

  F̂_x(x̄) = ∫_{0}^{x̄} λ e^(−λt) dt = [−e^(−λt)]_{t=0}^{x̄} = 1 − e^(−λx̄).

Let u be a sample from a uniform distribution. According to the method (A3) presented in class, we obtain a sample x_s by solving u = F̂_x(x_s) for x_s, that is,

  u = 1 − e^(−λ x_s)  ⟺  e^(−λ x_s) = 1 − u  ⟺  −λ x_s = ln(1 − u)  ⟺  x_s = −ln(1 − u)/λ.

Notice the following properties: u → 0 ⟹ x_s → 0, and u → 1 ⟹ x_s → +∞.

b) According to algorithm (A4) presented in class, we will use p̂_xy(x̄, ȳ) = p̂_{y|x}(ȳ|x̄) p̂_x(x̄) to obtain the sample x_s from p̂_x and u₁, and the sample y_s from p̂_{y|x}(ȳ|x_s) and u₂. We calculate

  ∫_{x̄}^{∞} p̂_xy(x̄, ỹ) dỹ = λ₁ e^(−λ₁ x̄) e^(λ₂ x̄) ∫_{x̄}^{∞} λ₂ e^(−λ₂ ỹ) dỹ = λ₁ e^(−λ₁ x̄),

and, therefore,

  p̂_x(x̄) = λ₁ e^(−λ₁ x̄) for 0 ≤ x̄, and 0 otherwise.
Furthermore, for 0 ≤ x̄,

  p̂_{y|x}(ȳ|x̄) = p̂_xy(x̄, ȳ)/p̂_x(x̄) = λ₂ e^(−λ₂(ȳ−x̄)) for x̄ ≤ ȳ, and 0 otherwise,

with the corresponding CDF

  F̂_{y|x}(ȳ|x̄) = ∫_{x̄}^{ȳ} p̂_{y|x}(t|x̄) dt = ∫_{x̄}^{ȳ} λ₂ e^(−λ₂(t−x̄)) dt = 1 − e^(−λ₂(ȳ−x̄)).

Since the PDF p̂_x(x̄) is identical to the PDF considered in part (a), we obtain the sample x_s from u₁ according to the solution from (a); that is, x_s = −ln(1 − u₁)/λ₁. Notice that x_s ≥ 0. We then obtain the sample y_s by setting u₂ = F̂_{y|x}(y_s|x_s) and solving for y_s similarly to the solution in part (a):

  y_s = −ln(1 − u₂)/λ₂ + x_s.

Problem 19
For all x̄, ȳ ∈ {0, 1} we have by marginalization,

  p_x(x̄) = Σ_{ȳ∈{0,1}} p_xy(x̄, ȳ) = p_xy(x̄, 0) + p_xy(x̄, 1) = p_xy(x̄, ȳ) + p_xy(x̄, 1 − ȳ),
  p_y(ȳ) = Σ_{x̄∈{0,1}} p_xy(x̄, ȳ) = p_xy(x̄, ȳ) + p_xy(1 − x̄, ȳ).

Therefore,

  p_x(x̄) + p_y(ȳ) = 2 p_xy(x̄, ȳ) + p_xy(x̄, 1 − ȳ) + p_xy(1 − x̄, ȳ)
  = 2 p_xy(x̄, ȳ) + p_xy(x̄, 1 − ȳ) + p_xy(1 − x̄, ȳ) + p_xy(1 − x̄, 1 − ȳ) − p_xy(1 − x̄, 1 − ȳ)
  = p_xy(x̄, ȳ) + Σ_{x̃∈{0,1}} Σ_{ỹ∈{0,1}} p_xy(x̃, ỹ) − p_xy(1 − x̄, 1 − ȳ)
  = p_xy(x̄, ȳ) + 1 − p_xy(1 − x̄, 1 − ȳ)
  ≤ p_xy(x̄, ȳ) + 1,

and hence p_xy(x̄, ȳ) ≥ p_x(x̄) + p_y(ȳ) − 1.
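The two sampling constructions of Problem 18 can be sketched in code (the labels (A3)/(A4) refer to the lecture; the parameter values below are arbitrary examples, chosen to satisfy 0 < λ₂ < λ₁):

```python
import math
import random

# Inverse-transform sampling: x_s solves u = F(x_s) for Exp(lam), as in
# part (a); part (b) adds an independent Exp(lam2) increment, following
# the factorization p_xy = p_{y|x} p_x.
def sample_exponential(lam: float, u: float) -> float:
    return -math.log(1.0 - u) / lam

def sample_pair(lam1: float, lam2: float, u1: float, u2: float):
    x_s = sample_exponential(lam1, u1)
    y_s = sample_exponential(lam2, u2) + x_s
    return x_s, y_s

random.seed(6)
lam1, lam2 = 3.0, 1.0    # example values with 0 < lam2 < lam1
pairs = [sample_pair(lam1, lam2, random.random(), random.random())
         for _ in range(100_000)]
mean_x = sum(x for x, _ in pairs) / len(pairs)
mean_y = sum(y for _, y in pairs) / len(pairs)
print(mean_x, mean_y)    # near 1/lam1 and 1/lam1 + 1/lam2
```

The sample means provide a sanity check: x_s is Exp(λ₁) with mean 1/λ₁, and y_s has mean 1/λ₁ + 1/λ₂.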
Problem 20

  E[x] = ∫_{0}^{∞} x p(x) dx = ∫_{0}^{a} x p(x) dx + ∫_{a}^{∞} x p(x) dx
  ≥ ∫_{a}^{∞} x p(x) dx ≥ ∫_{a}^{∞} a p(x) dx = a ∫_{a}^{∞} p(x) dx = a Pr(x ≥ a),

and dividing by a > 0 gives Pr(x ≥ a) ≤ E[x]/a.

Problem 21
This inequality can be obtained using Markov's inequality: (x − x̄)² is a non-negative random variable; apply Markov's inequality with a = k² (k > 0):

  Pr((x − x̄)² ≥ k²) ≤ E[(x − x̄)²]/k² = Var[x]/k² = σ²/k².

Since (x − x̄)² ≥ k² ⟺ |x − x̄| ≥ k, the result follows: Pr(|x − x̄| ≥ k) ≤ σ²/k².

Problem 22
Consider the random variable (x₁ + x₂ + ⋯ + xₙ)/n. It has the mean

  E[(x₁ + x₂ + ⋯ + xₙ)/n] = (1/n)(E[x₁] + ⋯ + E[xₙ]) = (1/n)(nµ) = µ

and variance

  Var[(x₁ + x₂ + ⋯ + xₙ)/n] = E[((x₁ + x₂ + ⋯ + xₙ)/n − µ)²]
  = (1/n²) E[((x₁ − µ) + ⋯ + (xₙ − µ))²]
  = (1/n²) E[(x₁ − µ)² + ⋯ + (xₙ − µ)² + cross terms].

The expected value of the cross terms is zero; for example,

  E[(x_i − µ)(x_j − µ)] = E[x_i − µ] E[x_j − µ] = 0 for i ≠ j, by independence.

Hence,

  Var[(x₁ + x₂ + ⋯ + xₙ)/n] = (1/n²)(Var[x₁] + ⋯ + Var[xₙ]) = nσ²/n² = σ²/n.
Applying Chebyshev's inequality with ε > 0 gives

  Pr(|(x₁ + ⋯ + xₙ)/n − µ| ≥ ε) ≤ σ²/(nε²) → 0 as n → ∞,

and therefore

  Pr(|(x₁ + ⋯ + xₙ)/n − µ| > ε) → 0 as n → ∞.

Remark: Changing ≥ to > is valid here, since

  Pr(|(x₁ + ⋯ + xₙ)/n − µ| ≥ ε) = Pr(|(x₁ + ⋯ + xₙ)/n − µ| > ε) + Pr(|(x₁ + ⋯ + xₙ)/n − µ| = ε),

where the last term is 0 since (x₁ + ⋯ + xₙ)/n is a CRV.

Problem 23
First, we rewrite

  Pr(5 < x < 15) = Pr(|x − 10| < 5) = 1 − Pr(|x − 10| ≥ 5).

Using Chebyshev's inequality with x̄ = 10, σ² = 15, k = 5 yields

  Pr(|x − 10| ≥ 5) ≤ 15/25 = 3/5.

Therefore, we can say that Pr(5 < x < 15) ≥ 1 − 3/5 = 2/5.

Problem 24
For s, t ≥ 0,

  Pr(x > s + t | x > t) = Pr(x > s + t and x > t)/Pr(x > t) = Pr(x > s + t)/Pr(x > t)
  = (∫_{s+t}^{∞} λ e^(−λx̄) dx̄) / (∫_{t}^{∞} λ e^(−λx̄) dx̄) = [−e^(−λx̄)]_{s+t}^{∞} / [−e^(−λx̄)]_{t}^{∞}
  = e^(−λ(s+t)) / e^(−λt) = e^(−λs) = Pr(x > s).
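The memoryless property shown above can also be checked by simulation, comparing the conditional tail Pr(x > s + t | x > t) against Pr(x > s). A Monte Carlo sketch with arbitrary parameter values:

```python
import math
import random

# Monte Carlo check of Pr(x > s+t | x > t) = Pr(x > s) for an
# exponential random variable.
random.seed(4)
lam, s, t = 1.0, 0.7, 1.3
n = 200_000
xs = [-math.log(1.0 - random.random()) / lam for _ in range(n)]

tail_t = [x for x in xs if x > t]
lhs = sum(x > s + t for x in tail_t) / len(tail_t)   # Pr(x > s+t | x > t)
rhs = math.exp(-lam * s)                             # Pr(x > s), closed form
print(lhs, rhs)   # both near 0.497
```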
More informationMultivariate Calculus Review Problems for Examination Two
Multivariate Calculus Review Problems for Examination Two Note: Exam Two is on Thursday, February 28, class time. The coverage is multivariate differential calculus and double integration: sections 13.3,
More informationminutes/question 26 minutes
st Set Section I (Multiple Choice) Part A (No Graphing Calculator) 3 problems @.96 minutes/question 6 minutes. What is 3 3 cos cos lim? h hh (D) - The limit does not exist.. At which of the five points
More informationDecomposition of log-linear models
Graphical Models, Lecture 5, Michaelmas Term 2009 October 27, 2009 Generating class Dependence graph of log-linear model Conformal graphical models Factor graphs A density f factorizes w.r.t. A if there
More informationx 6 + λ 2 x 6 = for the curve y = 1 2 x3 gives f(1, 1 2 ) = λ actually has another solution besides λ = 1 2 = However, the equation λ
Math 0 Prelim I Solutions Spring 010 1. Let f(x, y) = x3 y for (x, y) (0, 0). x 6 + y (4 pts) (a) Show that the cubic curves y = x 3 are level curves of the function f. Solution. Substituting y = x 3 in
More informationTopic 5 - Joint distributions and the CLT
Topic 5 - Joint distributions and the CLT Joint distributions Calculation of probabilities, mean and variance Expectations of functions based on joint distributions Central Limit Theorem Sampling distributions
More informationSets. De Morgan s laws. Mappings. Definition. Definition
Sets Let X and Y be two sets. Then the set A set is a collection of elements. Two sets are equal if they contain exactly the same elements. A is a subset of B (A B) if all the elements of A also belong
More informationGrad operator, triple and line integrals. Notice: this material must not be used as a substitute for attending the lectures
Grad operator, triple and line integrals Notice: this material must not be used as a substitute for attending the lectures 1 .1 The grad operator Let f(x 1, x,..., x n ) be a function of the n variables
More information9/23/2014. Why study? Lambda calculus. Church Rosser theorem Completeness of Lambda Calculus: Turing Complete
Dr A Sahu Dept of Computer Science & Engineering IIT Guwahati Why study? Lambda calculus Syntax Evaluation Relationship to programming languages Church Rosser theorem Completeness of Lambda Calculus: Turing
More informationEECS 126 Probability and Random Processes University of California, Berkeley: Fall 2017 Abhay Parekh and Jean Walrand September 21, 2017.
EECS 126 Probability and Random Processes University of California, Berkeley: Fall 2017 Abhay Parekh and Jean Walrand September 21, 2017 Midterm Exam Last Name First Name SID Rules. You have 80 minutes
More information1 Scope, Bound and Free Occurrences, Closed Terms
CS 6110 S18 Lecture 2 The λ-calculus Last time we introduced the λ-calculus, a mathematical system for studying the interaction of functional abstraction and functional application. We discussed the syntax
More information2. Solve for x when x < 22. Write your answer in interval notation. 3. Find the distance between the points ( 1, 5) and (4, 3).
Math 6 Practice Problems for Final. Find all real solutions x such that 7 3 x = 5 x 3.. Solve for x when 0 4 3x
More informationf (Pijk ) V. may form the Riemann sum: . Definition. The triple integral of f over the rectangular box B is defined to f (x, y, z) dv = lim
Chapter 14 Multiple Integrals..1 Double Integrals, Iterated Integrals, Cross-sections.2 Double Integrals over more general regions, Definition, Evaluation of Double Integrals, Properties of Double Integrals.3
More informationECE353: Probability and Random Processes. Lecture 11- Two Random Variables (II)
ECE353: Probability and Random Processes Lecture 11- Two Random Variables (II) Xiao Fu School of Electrical Engineering and Computer Science Oregon State University E-mail: xiao.fu@oregonstate.edu Joint
More informationTangent Planes and Linear Approximations
February 21, 2007 Tangent Planes Tangent Planes Let S be a surface with equation z = f (x, y). Tangent Planes Let S be a surface with equation z = f (x, y). Let P(x 0, y 0, z 0 ) be a point on S. Tangent
More informationMath 113 Calculus III Final Exam Practice Problems Spring 2003
Math 113 Calculus III Final Exam Practice Problems Spring 23 1. Let g(x, y, z) = 2x 2 + y 2 + 4z 2. (a) Describe the shapes of the level surfaces of g. (b) In three different graphs, sketch the three cross
More informationContents. MATH 32B-2 (18W) (L) G. Liu / (TA) A. Zhou Calculus of Several Variables. 1 Homework 1 - Solutions 3. 2 Homework 2 - Solutions 13
MATH 32B-2 (8) (L) G. Liu / (TA) A. Zhou Calculus of Several Variables Contents Homework - Solutions 3 2 Homework 2 - Solutions 3 3 Homework 3 - Solutions 9 MATH 32B-2 (8) (L) G. Liu / (TA) A. Zhou Calculus
More informationMinima, Maxima, Saddle points
Minima, Maxima, Saddle points Levent Kandiller Industrial Engineering Department Çankaya University, Turkey Minima, Maxima, Saddle points p./9 Scalar Functions Let us remember the properties for maxima,
More informationLecture 6: Chain rule, Mean Value Theorem, Tangent Plane
Lecture 6: Chain rule, Mean Value Theorem, Tangent Plane Rafikul Alam Department of Mathematics IIT Guwahati Chain rule Theorem-A: Let x : R R n be differentiable at t 0 and f : R n R be differentiable
More informationPartial Derivatives. Partial Derivatives. Partial Derivatives. Partial Derivatives. Partial Derivatives. Partial Derivatives
In general, if f is a function of two variables x and y, suppose we let only x vary while keeping y fixed, say y = b, where b is a constant. By the definition of a derivative, we have Then we are really
More informationLaplace Transform of a Lognormal Random Variable
Approximations of the Laplace Transform of a Lognormal Random Variable Joint work with Søren Asmussen & Jens Ledet Jensen The University of Queensland School of Mathematics and Physics August 1, 2011 Conference
More informationMATH 54 - LECTURE 4 DAN CRYTSER
MATH 54 - LECTURE 4 DAN CRYTSER Introduction In this lecture we review properties and examples of bases and subbases. Then we consider ordered sets and the natural order topology that one can lay on an
More informationMathematical preliminaries and error analysis
Mathematical preliminaries and error analysis Tsung-Ming Huang Department of Mathematics National Taiwan Normal University, Taiwan August 28, 2011 Outline 1 Round-off errors and computer arithmetic IEEE
More informationDubna 2018: lines on cubic surfaces
Dubna 2018: lines on cubic surfaces Ivan Cheltsov 20th July 2018 Lecture 1: projective plane Complex plane Definition A line in C 2 is a subset that is given by ax + by + c = 0 for some complex numbers
More informationWhat you will learn today
What you will learn today Tangent Planes and Linear Approximation and the Gradient Vector Vector Functions 1/21 Recall in one-variable calculus, as we zoom in toward a point on a curve, the graph becomes
More informationChapter 15 Vector Calculus
Chapter 15 Vector Calculus 151 Vector Fields 152 Line Integrals 153 Fundamental Theorem and Independence of Path 153 Conservative Fields and Potential Functions 154 Green s Theorem 155 urface Integrals
More informationLecture 2: Introduction to Numerical Simulation
Lecture 2: Introduction to Numerical Simulation Ahmed Kebaier kebaier@math.univ-paris13.fr HEC, Paris Outline of The Talk 1 Simulation of Random variables Outline 1 Simulation of Random variables Random
More informationPerspective projection and Transformations
Perspective projection and Transformations The pinhole camera The pinhole camera P = (X,,) p = (x,y) O λ = 0 Q λ = O λ = 1 Q λ = P =-1 Q λ X = 0 + λ X 0, 0 + λ 0, 0 + λ 0 = (λx, λ, λ) The pinhole camera
More informationEC 521 MATHEMATICAL METHODS FOR ECONOMICS. Lecture 2: Convex Sets
EC 51 MATHEMATICAL METHODS FOR ECONOMICS Lecture : Convex Sets Murat YILMAZ Boğaziçi University In this section, we focus on convex sets, separating hyperplane theorems and Farkas Lemma. And as an application
More informationRandom Number Generators
1/17 Random Number Generators Professor Karl Sigman Columbia University Department of IEOR New York City USA 2/17 Introduction Your computer generates" numbers U 1, U 2, U 3,... that are considered independent
More information1. Gradient and Chain Rule. First a reminder about the gradient f of a smooth function f. In R n, the gradient operator or grad has the form
REVIEW OF CRITICAL OINTS, CONVEXITY AND INTEGRALS KATHERINE DONOGHUE & ROB KUSNER 1. Gradient and Chain Rule. First a reminder about the gradient f of a smooth function f. In R n, the gradient operator
More informationGAMES Webinar: Rendering Tutorial 2. Monte Carlo Methods. Shuang Zhao
GAMES Webinar: Rendering Tutorial 2 Monte Carlo Methods Shuang Zhao Assistant Professor Computer Science Department University of California, Irvine GAMES Webinar Shuang Zhao 1 Outline 1. Monte Carlo integration
More informationMATH2111 Higher Several Variable Calculus Lagrange Multipliers
MATH2111 Higher Several Variable Calculus Lagrange Multipliers Dr. Jonathan Kress School of Mathematics and Statistics University of New South Wales Semester 1, 2016 [updated: February 29, 2016] JM Kress
More informationQuantitative Biology II!
Quantitative Biology II! Lecture 3: Markov Chain Monte Carlo! March 9, 2015! 2! Plan for Today!! Introduction to Sampling!! Introduction to MCMC!! Metropolis Algorithm!! Metropolis-Hastings Algorithm!!
More informationPhysics 235 Chapter 6. Chapter 6 Some Methods in the Calculus of Variations
Chapter 6 Some Methods in the Calculus of Variations In this Chapter we focus on an important method of solving certain problems in Classical Mechanics. In many problems we need to determine how a system
More informationImage Enhancement: To improve the quality of images
Image Enhancement: To improve the quality of images Examples: Noise reduction (to improve SNR or subjective quality) Change contrast, brightness, color etc. Image smoothing Image sharpening Modify image
More informationMonte Carlo Integration
Lecture 11: Monte Carlo Integration Computer Graphics and Imaging UC Berkeley CS184/284A, Spring 2016 Reminder: Quadrature-Based Numerical Integration f(x) Z b a f(x)dx x 0 = a x 1 x 2 x 3 x 4 = b E.g.
More informationLecture 5. If, as shown in figure, we form a right triangle With P1 and P2 as vertices, then length of the horizontal
Distance; Circles; Equations of the form Lecture 5 y = ax + bx + c In this lecture we shall derive a formula for the distance between two points in a coordinate plane, and we shall use that formula to
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 13 Random Numbers and Stochastic Simulation Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright
More information(1) Given the following system of linear equations, which depends on a parameter a R, 3x y + 5z = 2 4x + y + (a 2 14)z = a + 2
(1 Given the following system of linear equations, which depends on a parameter a R, x + 2y 3z = 4 3x y + 5z = 2 4x + y + (a 2 14z = a + 2 (a Classify the system of equations depending on the values of
More informationModule 2: Single Step Methods Lecture 4: The Euler Method. The Lecture Contains: The Euler Method. Euler's Method (Analytical Interpretations)
The Lecture Contains: The Euler Method Euler's Method (Analytical Interpretations) An Analytical Example file:///g /Numerical_solutions/lecture4/4_1.htm[8/26/2011 11:14:40 AM] We shall now describe methods
More informationAnswer sheet: Second Midterm for Math 2339
Answer sheet: Second Midterm for Math 2339 March 31, 2009 Problem 1. Let (a) f(x,y,z) = ln(z x 2 y 2 ). Evaluate f(1, 1, 2 + e). f(1, 1, 2 + e) = ln(2 + e 1 1) = lne = 1 (b) Find the domain of f. dom(f)
More informationSurfaces and Partial Derivatives
Surfaces and Partial Derivatives James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 9, 2016 Outline Partial Derivatives Tangent Planes
More informationCS242: Probabilistic Graphical Models Lecture 3: Factor Graphs & Variable Elimination
CS242: Probabilistic Graphical Models Lecture 3: Factor Graphs & Variable Elimination Instructor: Erik Sudderth Brown University Computer Science September 11, 2014 Some figures and materials courtesy
More informationLecture 17 - Friday May 8th
Lecture 17 - Friday May 8th jacques@ucsd.edu Key words: Multiple integrals Key concepts: Know how to evaluate multiple integrals 17.1 Multiple Integrals A rectangular parallelepiped in R n is a set of
More informationChapter 4 Convex Optimization Problems
Chapter 4 Convex Optimization Problems Shupeng Gui Computer Science, UR October 16, 2015 hupeng Gui (Computer Science, UR) Convex Optimization Problems October 16, 2015 1 / 58 Outline 1 Optimization problems
More informationExam 2 is Tue Nov 21. Bring a pencil and a calculator. Discuss similarity to exam1. HW3 is due Tue Dec 5.
Stat 100a: Introduction to Probability. Outline for the day 1. Bivariate and marginal density. 2. CLT. 3. CIs. 4. Sample size calculations. 5. Review for exam 2. Exam 2 is Tue Nov 21. Bring a pencil and
More information