Approximate Inference 1. Forward Sampling
Approximate Inference

This section on approximate inference relies on samples, also called particles. Full particles are complete assignments to all network variables, e.g. (X_1 = x_1, X_2 = x_2, ..., X_N = x_N).
Topological Sort

A topological sort (or topological order) is an ordering of the nodes in a DAG where X comes before Y in the ordering whenever there is a directed path from X to Y in the graph. Equivalently, a topological order is a total order consistent with the partial order that the edges impose on the nodes. There may be several topological orderings: for example, for a DAG with edges A → C and B → C, both A, B, C and B, A, C are valid topological orders.

Student Example

The network has five variables, Difficulty (D), Intelligence (I), Grade (G), SAT (S), and Letter (L), with edges D → G, I → G, I → S, and G → L. Its CPTs are:

P(D): low 0.6, high 0.4

P(I): low 0.7, high 0.3

P(S | I):
  I     S     P(S|I)
  low   low   0.95
  low   high  0.05
  high  low   0.2
  high  high  0.8

P(G | D, I):
  D     I     G   P(G|D,I)
  low   low   A   0.3
  low   low   B   0.4
  low   low   C   0.3
  low   high  A   0.9
  low   high  B   0.08
  low   high  C   0.02
  high  low   A   0.05
  high  low   B   0.25
  high  low   C   0.7
  high  high  A   0.5
  high  high  B   0.3
  high  high  C   0.2

P(L | G):
  G   L       P(L|G)
  A   weak    0.1
  A   strong  0.9
  B   weak    0.4
  B   strong  0.6
  C   weak    0.99
  C   strong  0.01
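For the code sketches that follow, it helps to have these tables in code. Here is one possible Python encoding (the dictionary layout and the names CPTS and PARENTS are my own, not from the lecture), mapping each variable and each assignment to its parents to a distribution over values:

```python
# CPTs for the student network, keyed by tuples of parent values.
CPTS = {
    "D": {(): {"low": 0.6, "high": 0.4}},
    "I": {(): {"low": 0.7, "high": 0.3}},
    "G": {("low", "low"):   {"A": 0.3,  "B": 0.4,  "C": 0.3},   # D=low,  I=low
          ("low", "high"):  {"A": 0.9,  "B": 0.08, "C": 0.02},  # D=low,  I=high
          ("high", "low"):  {"A": 0.05, "B": 0.25, "C": 0.7},   # D=high, I=low
          ("high", "high"): {"A": 0.5,  "B": 0.3,  "C": 0.2}},  # D=high, I=high
    "S": {("low",):  {"low": 0.95, "high": 0.05},
          ("high",): {"low": 0.2,  "high": 0.8}},
    "L": {("A",): {"weak": 0.1,  "strong": 0.9},
          ("B",): {"weak": 0.4,  "strong": 0.6},
          ("C",): {"weak": 0.99, "strong": 0.01}},
}
PARENTS = {"D": (), "I": (), "G": ("D", "I"), "S": ("I",), "L": ("G",)}
```

Keying on the tuple of parent values makes the CPT lookup during sampling a single dictionary access.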
Forward Sampling: Example

(Network: Difficulty → Grade ← Intelligence → SAT; Grade → Letter.)

Topological ordering: D, I, G, S, L

1. Sample D from P(D). (Say you get D = high.)
2. Sample I from P(I). (Say you get I = low.)
3. Sample G from P(G | I = low, D = high). (Say you get G = C.)
4. Sample S from P(S | I = low). (Say you get S = low.)
5. Sample L from P(L | G = C). (Say you get L = weak.)

You now have a sample (D = high, I = low, G = C, S = low, L = weak).

Forward Sampling: Algorithm

Suppose you want to calculate P(X_1 = x_1, X_2 = x_2, ..., X_n = x_n) using forward sampling on a Bayesian network. The algorithm:

1. Do a topological sort of the nodes in the Bayesian network.
2. For j = 1 to NUM_SAMPLES:
   For each node i in the ordering (starting from the top of the Bayesian network down), sample the value x̂_i from the distribution P(X_i | Parents(X_i)).
   Add (x̂_1, x̂_2, ..., x̂_n) to your collection of samples.
3. Let

   P(X_1 = x_1, X_2 = x_2, ..., X_n = x_n) ≈ (# of samples with X_1 = x_1, X_2 = x_2, ..., X_n = x_n) / NUM_SAMPLES
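A minimal sketch of this algorithm, assuming the CPTS and PARENTS dictionaries from the previous sketch; random.choices stands in for the multinomial-sampling step discussed next, and all function names are illustrative:

```python
import random

TOPO_ORDER = ["D", "I", "G", "S", "L"]   # one valid topological ordering

def sample_node(var, assignment):
    """Draw a value for var from P(var | Parents(var)), given sampled parents."""
    parent_values = tuple(assignment[p] for p in PARENTS[var])
    dist = CPTS[var][parent_values]
    values = list(dist)
    return random.choices(values, weights=[dist[v] for v in values])[0]

def forward_sample():
    """Generate one full particle by sampling each node in topological order."""
    assignment = {}
    for var in TOPO_ORDER:
        assignment[var] = sample_node(var, assignment)
    return assignment

# Estimate a full joint probability as the fraction of matching samples.
samples = [forward_sample() for _ in range(10_000)]
target = {"D": "high", "I": "low", "G": "C", "S": "low", "L": "weak"}
count = sum(all(s[v] == x for v, x in target.items()) for s in samples)
print(count / len(samples))   # approximates P(D=high, I=low, G=C, S=low, L=weak)
```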
Sampling from a Multinomial

How do you sample from P(X_i | Parents(X_i))? Note that P(X_i | Parents(X_i)) is a multinomial distribution: P(x_1) = θ_1, ..., P(x_k) = θ_k.

How do you sample from a multinomial distribution P(x_1) = θ_1, ..., P(x_k) = θ_k?

- Generate a sample s uniformly from [0, 1].
- Partition the interval into k subintervals: [0, θ_1), [θ_1, θ_1 + θ_2), ...
  More generally, the i-th interval is [ Σ_{j=1}^{i−1} θ_j , Σ_{j=1}^{i} θ_j ).
- If s is in the i-th interval, the sample value is x_i.
- Use binary search to find the interval containing s in time O(log k).
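A small sketch of this interval method in Python, using bisect for the O(log k) lookup; the function name and arguments are illustrative:

```python
import bisect
import itertools
import random

def sample_multinomial(values, thetas):
    """Sample x_i with probability theta_i via the interval method."""
    # Interval endpoints: theta_1, theta_1 + theta_2, ..., 1.0
    cum = list(itertools.accumulate(thetas))
    s = random.random()                       # uniform draw from [0, 1)
    i = bisect.bisect_right(cum, s)           # binary search: O(log k)
    return values[min(i, len(values) - 1)]    # min() guards against float round-off

# Example: one draw from P(G | D=high, I=low) in the student network.
print(sample_multinomial(["A", "B", "C"], [0.05, 0.25, 0.7]))
```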
Estimating Probabilities from Samples

Suppose your list of samples looks like the following table:

  D     I     G   S     L
  low   low   B   low   weak
  low   high  A   high  strong
  low   high  A   high  weak
  high  high  A   high  strong
  high  low   C   low   weak

Then P(I = high) = 3/5 = 0.6. Note that this estimate becomes much more accurate as the number of samples heads to infinity.

From a set of M samples {x[1], ..., x[M]}, we can estimate the expectation of any function f as:

  Ê(f) = (1/M) Σ_{m=1}^{M} f(x[m])

To estimate P(y):

  P̂(y) = (1/M) Σ_{m=1}^{M} I{y[m] = y}

The indicator I{y[m] = y} returns 1 if sample m matches the values in y (e.g. y is I = high) and 0 otherwise.
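A sketch of this indicator-based estimate, hardcoding the five samples from the table above (the helper name is mine):

```python
samples = [
    {"D": "low",  "I": "low",  "G": "B", "S": "low",  "L": "weak"},
    {"D": "low",  "I": "high", "G": "A", "S": "high", "L": "strong"},
    {"D": "low",  "I": "high", "G": "A", "S": "high", "L": "weak"},
    {"D": "high", "I": "high", "G": "A", "S": "high", "L": "strong"},
    {"D": "high", "I": "low",  "G": "C", "S": "low",  "L": "weak"},
]

def estimate_prob(samples, y):
    """P_hat(y): the fraction of samples whose values match the event y."""
    matches = sum(all(s[v] == val for v, val in y.items()) for s in samples)
    return matches / len(samples)

print(estimate_prob(samples, {"I": "high"}))   # 3/5 = 0.6
```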
Rejection Sampling

What if we want to estimate P(y | E = e)? Rejection sampling: do forward sampling, but throw out samples where E ≠ e.

Example: estimate P(I = high | L = weak) from these samples:

  D     I     G   S     L
  low   low   B   low   weak
  low   high  A   high  strong
  low   high  A   high  weak
  high  high  A   high  strong
  high  low   C   low   weak

Keeping only the three samples with L = weak, one of them has I = high, so P(I = high | L = weak) ≈ 1/3.
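A sketch of rejection sampling, reusing forward_sample from the earlier forward-sampling sketch (rejection_sample and its arguments are illustrative names):

```python
def rejection_sample(sampler, evidence, num_samples):
    """Forward-sample num_samples times, keeping only samples where E = e."""
    consistent = lambda s: all(s[v] == val for v, val in evidence.items())
    return [s for s in (sampler() for _ in range(num_samples)) if consistent(s)]

# Estimate P(I=high | L=weak): condition by filtering, then count.
kept = rejection_sample(forward_sample, {"L": "weak"}, 10_000)
print(sum(s["I"] == "high" for s in kept) / len(kept))
```

Note that the effective sample size is len(kept), not 10,000; that wasted work is exactly the problem discussed next.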
Rejection Sampling

What if the evidence E = e is very, very rare? For example, if P(e) = 0.001, then out of 10,000 samples we expect only about 10 unrejected samples. To obtain at least M unrejected samples, we need to generate on average M / P(e) samples. If evidence is rare, we end up generating (and rejecting) a lot of samples, which wastes time.

Rejection Sampling: Bad News

Rare evidence is the norm! As the number of evidence variables k = |E| grows, the probability of the evidence decreases exponentially with k. We need something better than rejection sampling!
Likelihood Weighting

Intuition: weight samples according to the probability of the evidence.

Consider the Intelligence → SAT fragment of the network:

P(I): low 0.7, high 0.3

P(S | I):
  I     S     P(S|I)
  low   low   0.95
  low   high  0.05
  high  low   0.2
  high  high  0.8

Drawing I = high and S = high should count as 80% of a sample; drawing I = low and S = high should count as only 5% of a sample (since P(S = high | I = high) = 0.8 and P(S = high | I = low) = 0.05).
Likelihood Weighting

Weighted particles: ⟨x[1], w[1]⟩, ..., ⟨x[M], w[M]⟩

Estimate:

  P̂(y | e) = ( Σ_m w[m] · I{y[m] = y} ) / ( Σ_m w[m] )

Procedure LW-Sample(
  B,     // Bayesian network over X
  Z = z  // Event in the network
)
 1. Let X_1, ..., X_n be a topological ordering of X
 2. w ← 1
 3. for i = 1, ..., n
 4.   u_i ← x⟨Pa_{X_i}⟩        // Assignment to Pa_{X_i} in x_1, ..., x_{i−1}
 5.   if X_i ∉ Z then
 6.     Sample x_i from P(X_i | u_i)
 7.   else
 8.     x_i ← z⟨X_i⟩           // Assignment to X_i in z
 9.     w ← w · P(x_i | u_i)   // Multiply weight by probability of desired value
10. return (x_1, ..., x_n), w
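A minimal sketch of LW-Sample and the weighted estimate, again reusing CPTS, PARENTS, TOPO_ORDER, and sample_node from the forward-sampling sketches (function names are illustrative):

```python
def lw_sample(evidence):
    """One weighted particle: evidence variables are fixed, not sampled."""
    assignment, w = {}, 1.0
    for var in TOPO_ORDER:
        u = tuple(assignment[p] for p in PARENTS[var])   # u_i <- x<Pa_Xi>
        if var in evidence:
            assignment[var] = evidence[var]              # x_i <- z<X_i>
            w *= CPTS[var][u][evidence[var]]             # w <- w * P(x_i | u_i)
        else:
            assignment[var] = sample_node(var, assignment)
    return assignment, w

def lw_estimate(query, evidence, num_samples):
    """P_hat(y | e) = (total weight of samples matching y) / (total weight)."""
    num = den = 0.0
    for _ in range(num_samples):
        s, w = lw_sample(evidence)
        den += w
        if all(s[v] == val for v, val in query.items()):
            num += w
    return num / den

print(lw_estimate({"I": "high"}, {"L": "weak"}, 10_000))   # P(I=high | L=weak)
```

No sample is thrown away: a particle inconsistent with likely evidence simply never arises, and each kept particle contributes in proportion to how probable the evidence is given its sampled values.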