Joint probability distributions

1 Joint probability distributions Let X and Y be two discrete rv's defined on the sample space S. The joint probability mass function p(x, y) is p(x, y) = P(X = x, Y = y). Note that p(x, y) ≥ 0 and Σ_x Σ_y p(x, y) = 1. For any set A of (x, y) pairs, P[(X, Y) ∈ A] = Σ_{(x,y)∈A} p(x, y). The marginal probability mass functions of X and Y, denoted by p_X(x) and p_Y(y), are p_X(x) = Σ_y p(x, y) and p_Y(y) = Σ_x p(x, y).
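These definitions translate directly into a few lines of code. The sketch below (Python, using a small made-up 2 x 2 pmf purely for illustration) computes the marginals as sums over the other variable and an event probability as a sum of p(x, y) over the pairs in A.

```python
# Minimal sketch of the joint-pmf definitions (the pmf values are hypothetical).
joint_pmf = {
    (0, 0): 0.10, (0, 1): 0.20,
    (1, 0): 0.30, (1, 1): 0.40,
}  # values must be nonnegative and sum to 1
assert abs(sum(joint_pmf.values()) - 1.0) < 1e-12

def marginal_x(pmf):
    """p_X(x) = sum over y of p(x, y)."""
    out = {}
    for (x, y), p in pmf.items():
        out[x] = out.get(x, 0.0) + p
    return out

def marginal_y(pmf):
    """p_Y(y) = sum over x of p(x, y)."""
    out = {}
    for (x, y), p in pmf.items():
        out[y] = out.get(y, 0.0) + p
    return out

def prob_event(pmf, in_A):
    """P[(X, Y) in A] = sum of p(x, y) over the pairs (x, y) in A."""
    return sum(p for (x, y), p in pmf.items() if in_A(x, y))

print(marginal_x(joint_pmf))                           # {0: 0.3, 1: 0.7}
print(marginal_y(joint_pmf))                           # {0: 0.4, 1: 0.6}
print(prob_event(joint_pmf, lambda x, y: x + y >= 1))  # 0.2 + 0.3 + 0.4 = 0.9
```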

2 Example Let X = number of meals in restaurants and Y = number of movies a randomly selected JMU student has on a typical weekend. The joint pmf of X and Y is given by a table of values p(x, y) for x = 0, 1, 2, 3 and y = 0, 1, 2, 3. From that table, P(Y ≥ 2) = p(0, 2) + p(1, 2) + p(2, 2) + p(3, 2) + p(0, 3) + p(1, 3) + p(2, 3) + p(3, 3) = 0.51. The marginal distributions are: p_X(0) = 0.35, p_X(1) = 0.38, p_X(2) = 0.19, p_X(3) = 0.08 and p_Y(0) = 0.17, p_Y(1) = 0.32, p_Y(2) = 0.28, p_Y(3) = 0.23.

3 Exercise Let X = the number of heads and Y = the number of heads minus the number of tails obtained in 3 flips of a balanced coin. Find the joint probability distribution of X and Y.
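One way to tabulate the joint pmf (or to check an answer found by hand) is to enumerate the 8 equally likely outcomes directly. A short sketch:

```python
# Enumerate 3 flips of a balanced coin, record X = heads and Y = heads - tails,
# and count outcomes to get the joint pmf.
from itertools import product
from collections import Counter

counts = Counter()
for flips in product("HT", repeat=3):
    heads = flips.count("H")
    tails = 3 - heads
    counts[(heads, heads - tails)] += 1

joint_pmf = {xy: c / 8 for xy, c in counts.items()}
for (x, y), p in sorted(joint_pmf.items()):
    print(f"p({x}, {y}) = {p}")
# Since Y = 2X - 3, the mass sits only on the pairs (0,-3), (1,-1), (2,1), (3,3).
```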

4 Joint pdf Let X and Y be continuous rv's. Then f(x, y) is the joint probability density function for X and Y if for any two-dimensional set A, P[(X, Y) ∈ A] = ∫∫_A f(x, y) dx dy. In particular, P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_a^b ∫_c^d f(x, y) dy dx. The marginal probability density functions of X and Y, denoted by f_X(x) and f_Y(y), are given by f_X(x) = ∫_{-∞}^{∞} f(x, y) dy and f_Y(y) = ∫_{-∞}^{∞} f(x, y) dx.
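The same ideas can be checked numerically. The sketch below uses scipy and a hypothetical density f(x, y) = x + y on the unit square (which integrates to 1) to compute a rectangle probability and one value of a marginal; note that scipy's dblquad integrates the inner variable first, so its integrand takes (y, x).

```python
# Numeric illustration of the joint-pdf definitions (density chosen for illustration).
from scipy.integrate import dblquad, quad

f = lambda x, y: x + y if (0 <= x <= 1 and 0 <= y <= 1) else 0.0

# P(a <= X <= b, c <= Y <= d) = int_a^b int_c^d f(x, y) dy dx
a, b, c, d = 0.0, 0.5, 0.0, 0.5
prob, _ = dblquad(lambda y, x: f(x, y), a, b, lambda x: c, lambda x: d)
print(prob)   # int_0^.5 int_0^.5 (x + y) dy dx = 0.125

# Marginal f_X(x) = int f(x, y) dy, evaluated at one point
x0 = 0.25
fx0, _ = quad(lambda y: f(x0, y), 0, 1)
print(fx0)    # x0 + 1/2 = 0.75
```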

5 Example* The joint pdf of X and Y is f(x, y) = (2/3)(x + 2y) for 0 < x < 1, 0 < y < 1, and 0 otherwise. Then P(0 < X < 0.3, 0 < Y < 0.5) = ∫_0^0.3 ∫_0^0.5 (2/3)(x + 2y) dy dx = ∫_0^0.3 [ (2/3)(xy + y²) ]_{y=0}^{y=0.5} dx = ∫_0^0.3 (x/3 + 1/6) dx = [ x²/6 + x/6 ]_0^0.3 = 0.065. The marginals are f_X(x) = ∫_0^1 (2/3)(x + 2y) dy = [ (2/3)(xy + y²) ]_{y=0}^{y=1} = (2/3)(x + 1), 0 < x < 1, and f_Y(y) = ∫_0^1 (2/3)(x + 2y) dx = (1/3)(1 + 4y), 0 < y < 1.
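A symbolic check of this example (a sketch using sympy, outside the notes themselves) reproduces the probability and both marginals:

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)
f = sp.Rational(2, 3) * (x + 2 * y)    # joint pdf on 0 < x < 1, 0 < y < 1

# P(0 < X < 0.3, 0 < Y < 0.5): integrate y first, then x
prob = sp.integrate(f, (y, 0, sp.Rational(1, 2)), (x, 0, sp.Rational(3, 10)))
print(prob, float(prob))               # 13/200 = 0.065

f_X = sp.simplify(sp.integrate(f, (y, 0, 1)))   # 2*(x + 1)/3
f_Y = sp.simplify(sp.integrate(f, (x, 0, 1)))   # (4*y + 1)/3
print(f_X, f_Y)
```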

6 Exercise The joint pdf of X and Y is f(x, y) = (3/5)x(y + x) for 0 < x < 1, 0 < y < 2, and 0 otherwise. Find P(0 < X < 1/2, 0 < Y < 1). Find f_Y(y).

7 Solution P(0 < X < 1/2, 0 < Y < 1) = ∫_0^{1/2} ∫_0^1 (3/5)x(y + x) dy dx = ∫_0^{1/2} [ (3/5)x(y²/2 + xy) ]_{y=0}^{y=1} dx = ∫_0^{1/2} ( (3/10)x + (3/5)x² ) dx = [ (3/20)x² + (1/5)x³ ]_0^{1/2} = 5/80 = 1/16. f_Y(y) = ∫_0^1 (3/5)x(y + x) dx = [ (3/5)(y x²/2 + x³/3) ]_{x=0}^{x=1} = (3/10)y + 1/5, 0 < y < 2. Check: ∫_0^2 ( (3/10)y + 1/5 ) dy = [ (3/20)y² + y/5 ]_0^2 = 1.

8 Example The joint pdf of X and Y is f(x, y) = 24xy for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, x + y ≤ 1, and 0 otherwise. Let A = {(x, y) : 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, x + y ≤ 0.5}. Then P[(X, Y) ∈ A] = ∫_0^{0.5} ∫_0^{0.5−x} 24xy dy dx = 0.0625. The marginal pdf is f_X(x) = ∫ f(x, y) dy = ∫_0^{1−x} 24xy dy = 12x(1 − x)², 0 ≤ x ≤ 1.
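Because the support is a triangle, the inner limit of integration depends on x; scipy's dblquad handles this by taking functions for the limits. A quick numeric check of the two results above (a sketch, not part of the notes):

```python
from scipy.integrate import dblquad, quad

f = lambda x, y: 24 * x * y if x + y <= 1 else 0.0

# P(X + Y <= 0.5): y runs from 0 to 0.5 - x for x in [0, 0.5]
prob, _ = dblquad(lambda y, x: f(x, y), 0, 0.5, lambda x: 0, lambda x: 0.5 - x)
print(prob)                            # 0.0625

# f_X(x) = int_0^{1-x} 24 x y dy = 12 x (1 - x)^2, checked at x = 0.5
fx, _ = quad(lambda y: f(0.5, y), 0, 1, points=[0.5])
print(fx, 12 * 0.5 * (1 - 0.5) ** 2)   # both 1.5
```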

9 Independent rv's Two rv's X and Y are said to be independent if for every pair (x, y), p(x, y) = p_X(x) p_Y(y) when X and Y are discrete, or f(x, y) = f_X(x) f_Y(y) when X and Y are continuous; otherwise they are dependent. Meals and movies example: p(0, 0) = 0.05 ≠ p_X(0) p_Y(0) = (0.35)(0.17) = 0.0595, so X and Y are dependent. Example*: f_X(x) f_Y(y) = (2/3)(x + 1) · (1/3)(1 + 4y) ≠ f(x, y), so X and Y are dependent.
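For the continuous case it is enough to exhibit one point where the joint pdf and the product of the marginals disagree. A one-point numeric check for Example* (a sketch):

```python
# Dependence check for Example*: compare f(x, y) with f_X(x) * f_Y(y) at one point.
def f(x, y):  return (2 / 3) * (x + 2 * y)   # joint pdf, 0 < x < 1, 0 < y < 1
def f_X(x):   return (2 / 3) * (x + 1)
def f_Y(y):   return (1 / 3) * (1 + 4 * y)

x0, y0 = 0.25, 0.25
print(f(x0, y0), f_X(x0) * f_Y(y0))   # 0.5 versus 0.555..., so X and Y are dependent
```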

10 Expected values, covariance and correlation Let X and Y be two rv's with joint pmf p(x, y) or joint pdf f(x, y). Then E[h(X, Y)] = Σ_x Σ_y h(x, y) p(x, y) if X and Y are discrete, and E[h(X, Y)] = ∫∫ h(x, y) f(x, y) dx dy if X and Y are continuous. The covariance between X and Y is Cov(X, Y) = E[(X − µ_X)(Y − µ_Y)], which equals Σ_x Σ_y (x − µ_X)(y − µ_Y) p(x, y) in the discrete case and ∫∫ (x − µ_X)(y − µ_Y) f(x, y) dx dy in the continuous case. Proposition: Cov(X, Y) = E(XY) − µ_X µ_Y.
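The shortcut formula Cov(X, Y) = E(XY) − µ_X µ_Y is easy to implement for any discrete joint pmf stored as a dictionary. A sketch, demonstrated on the coin-flip pmf from the earlier exercise (X = heads, Y = heads minus tails in 3 fair flips):

```python
def covariance(pmf):
    """Cov(X, Y) = E(XY) - mu_X * mu_Y for a joint pmf {(x, y): p}."""
    mu_x = sum(x * p for (x, y), p in pmf.items())
    mu_y = sum(y * p for (x, y), p in pmf.items())
    e_xy = sum(x * y * p for (x, y), p in pmf.items())
    return e_xy - mu_x * mu_y

coin_pmf = {(0, -3): 1/8, (1, -1): 3/8, (2, 1): 3/8, (3, 3): 1/8}
print(covariance(coin_pmf))   # 1.5, i.e. 2 * Var(X), since Y = 2X - 3
```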

11 Example Meals and movies example: from the joint pmf table and its marginals, µ_X = Σ x p_X(x) = 1, µ_Y = Σ y p_Y(y) = 1.57, and E(XY) = Σ_x Σ_y x y p(x, y) = 1.5, so Cov(X, Y) = E(XY) − µ_X µ_Y = 1.5 − (1)(1.57) = −0.07.

12 Example The joint pdf of X and Y is f(x, y) = 24xy for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, x + y ≤ 1, and 0 otherwise. Then f_X(x) = ∫ f(x, y) dy = ∫_0^{1−x} 24xy dy = 12x(1 − x)², 0 ≤ x ≤ 1. Similarly, f_Y(y) = 12y(1 − y)², 0 ≤ y ≤ 1, and µ_X = µ_Y = 2/5. E(XY) = ∫_0^1 ∫_0^{1−x} xy · 24xy dy dx = 8 ∫_0^1 x²(1 − x)³ dx = 2/15. Thus Cov(X, Y) = 2/15 − (2/5)(2/5) = −2/75.
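These integrals over the triangle can be verified symbolically; the sketch below (sympy, not part of the notes) recovers the means, E(XY), and the covariance:

```python
import sympy as sp

x, y = sp.symbols("x y", nonnegative=True)
f = 24 * x * y                          # joint pdf on the triangle x + y <= 1

# Integrate y from 0 to 1 - x (inner), then x from 0 to 1 (outer)
mu_x = sp.integrate(x * f, (y, 0, 1 - x), (x, 0, 1))
mu_y = sp.integrate(y * f, (y, 0, 1 - x), (x, 0, 1))
e_xy = sp.integrate(x * y * f, (y, 0, 1 - x), (x, 0, 1))

print(mu_x, mu_y, e_xy)                 # 2/5, 2/5, 2/15
print(e_xy - mu_x * mu_y)               # -2/75
```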

13 The correlation coefficient of X and Y, denoted by Corr(X, Y), ρ_{X,Y}, or just ρ, is ρ_{X,Y} = Cov(X, Y) / (σ_X σ_Y). Meals and movies example: E(X²) = 1.86, σ_X² = 1.86 − 1² = 0.86, σ_X = 0.927, and σ_Y = 1.022, so ρ_{X,Y} = −0.07 / (0.927 × 1.022) ≈ −0.07. Proposition: 1. If a and c are either both positive or both negative, Corr(aX + b, cY + d) = Corr(X, Y). 2. −1 ≤ ρ ≤ 1. 3. If X and Y are independent, then ρ = 0, but ρ = 0 does not necessarily imply independence. 4. ρ = ±1 iff Y = aX + b for some a ≠ 0.

14 Suppose X and Y are independent with pdf's f_X(x) = 3x², 0 ≤ x ≤ 1, and f_Y(y) = 2y, 0 ≤ y ≤ 1. Find E(X/Y). Optional: Find P(X ≥ Y).

15 By independence, the joint pdf of X and Y is f(x, y) = f_X(x) f_Y(y) = 6x²y, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. E(X/Y) = ∫_0^1 ∫_0^1 (x/y) · 6x²y dx dy = 1.5. P(X ≥ Y) = ∫_0^1 ∫_0^x 6x²y dy dx = 0.6.
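A Monte Carlo sanity check is also possible (a sketch, not part of the notes): sample X and Y independently by inverting their CDFs, F_X(x) = x³ and F_Y(y) = y², then average. The estimates should land near 1.5 and 0.6, although the X/Y estimate is noisier because the ratio is heavy-tailed near Y = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.random(n) ** (1 / 3)   # X ~ f_X(x) = 3x^2 on (0, 1), via inverse CDF
y = rng.random(n) ** (1 / 2)   # Y ~ f_Y(y) = 2y   on (0, 1), via inverse CDF

print((x / y).mean())          # approximately 1.5 (noisy: X/Y has infinite variance)
print((x >= y).mean())         # approximately 0.6
```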

16 Exercise Suppose the joint pmf of discrete rv's X and Y is given by p(x, y) = kxy for x = 1, 2 and y = 1, 2, 3, where k is an unknown constant. 1) Find the proper value of k. 2) Find the marginal probability distributions of X and Y.
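A brute-force check of this exercise (a sketch): k is fixed by requiring the probabilities to sum to 1, after which the marginals are row and column sums.

```python
from fractions import Fraction

support = [(x, y) for x in (1, 2) for y in (1, 2, 3)]
k = Fraction(1, sum(x * y for x, y in support))       # 1 / (sum of x*y over the support)
pmf = {(x, y): k * x * y for x, y in support}

p_X = {x: sum(p for (xx, y), p in pmf.items() if xx == x) for x in (1, 2)}
p_Y = {y: sum(p for (x, yy), p in pmf.items() if yy == y) for y in (1, 2, 3)}
print(k)      # the normalizing constant
print(p_X)    # marginal pmf of X
print(p_Y)    # marginal pmf of Y
```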
