Introduction to Probabilistic Latent Semantic Analysis. NYP Predictive Analytics Meetup June 10, 2010


1 Introduction to Probabilistic Latent Semantic Analysis NYP Predictive Analytics Meetup June 10, 2010

2 PLSA A type of latent variable model with observed count data and nominal latent variable(s). Despite the adjective semantic in the acronym, the method is not inherently about meaning. Not any more than, say, its cousin Latent Class Analysis. Rather, the name must be read as P + LS(A/I), marking the genealogy of PLSA as a probabilistic recast of Latent Semantic Analysis/Indexing.

3 LSA Factorization of the data matrix into orthogonal matrices to form bases of a (semantic) vector space: X = U Σ V^T (the SVD). Reduction of the original matrix to lower rank: keep only the k largest singular values, X ≈ U_k Σ_k V_k^T. LSA for text complexity: cosine similarity between paragraphs.
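
As a toy illustration of the factorization and cosine-similarity steps above, here is a minimal LSA sketch in Python (the matrix, the rank k, and the helper names are illustrative assumptions, not from the slides):

```python
# Minimal LSA sketch: truncated SVD of a document-term matrix,
# then cosine similarity between documents in the reduced space.
import numpy as np

def lsa_embed(X, k):
    """X: (n_docs, n_terms) count matrix; returns (n_docs, k) embeddings."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]          # rank-k document coordinates

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

X = np.random.poisson(1.0, size=(6, 20)).astype(float)  # toy doc-term counts
docs = lsa_embed(X, k=2)
print(cosine(docs[0], docs[1]))      # similarity between paragraphs 0 and 1
```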

4 Problems with LSA Non-probabilistic. Fails to handle polysemy; polysemy is called noise in the LSA literature. Shown (by Hofmann) to underperform PLSA on IR tasks.

5 Probabilities Why? Probabilistic systems allow for the evaluation of propositions under conditions of uncertainty. Probabilistic semantics. Probabilistic systems provide a uniform mechanism for integrating and reasoning over heterogeneous information. In PLSA, semantic dimensions are represented by unigram language models, more transparent than eigenvectors. The latent variable structure allows for subtopics (hierarchical PLSA). If the weather is sunny tomorrow and I'm not tired we will go to the beach: p(beach) = p(sunny & ~tired) = p(sunny)(1 − p(tired)), assuming the two conditions are independent.

6 A Generative Model? Let X be a random vector with components {X_1, X_2,..., X_n}, themselves random variables. Each realization of X is assigned to a class, one value of a random variable Y. A generative model tells a story about how the Xs came about: once upon a time, a Y was selected, then Xs were created out of that Y. A discriminative model strives to identify, as unambiguously as possible, the Y value for some given X.

7 A Generative Model? A discriminative model estimates P(Y|X) directly. A generative model estimates P(X|Y) and P(Y). The predictive direction is then computed via Bayesian inversion: P(Y|X) = P(X|Y) P(Y) / P(X), where P(X) is obtained by conditioning on Y: P(X) = Σ_Y P(X|Y) P(Y).

8 A Generative Model? A classic generative/discriminative pair: Naïve Bayes vs Logistic Regression. Naïve Bayes assumes that the X_i's are conditionally independent given Y, so it estimates P(X_i|Y). Logistic regression makes other assumptions, e.g. linearity of the independent variables with the logit of the dependent, independence of errors, but handles correlated predictors (up to perfect collinearity).
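
To make the generative recipe concrete, here is a toy Naïve Bayes sketch (binary features, two classes, and Laplace smoothing are assumptions for illustration): estimate P(X_i|Y) and P(Y), then invert with Bayes' rule to obtain P(Y|X).

```python
# Toy Naive Bayes: fit P(X_i|Y) and P(Y), predict via Bayesian inversion.
import numpy as np

def fit_nb(X, y, alpha=1.0):
    classes = np.unique(y)
    prior = np.array([(y == c).mean() for c in classes])          # P(Y)
    # P(X_i = 1 | Y = c), Laplace-smoothed
    cond = np.array([(X[y == c].sum(axis=0) + alpha) /
                     ((y == c).sum() + 2 * alpha) for c in classes])
    return classes, prior, cond

def predict_proba(x, classes, prior, cond):
    # log P(Y=c) + sum_i log P(X_i = x_i | Y=c)
    log_joint = np.log(prior) + (np.log(cond) @ x + np.log(1 - cond) @ (1 - x))
    joint = np.exp(log_joint - log_joint.max())
    return joint / joint.sum()        # divide by P(X): Bayesian inversion

X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0]])
y = np.array([1, 1, 0, 0])
classes, prior, cond = fit_nb(X, y)
print(classes, predict_proba(np.array([1, 0, 0]), classes, prior, cond))
```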

9 A Generative Model? Generative models have richer probabilistic semantics. Functions run both ways: you can assign distributions to the independent variables, even to previously unseen realizations. Ng and Jordan (2002) show that logistic regression has higher asymptotic accuracy but converges more slowly, suggesting a trade-off between accuracy and variance. Overall, a trade-off between accuracy and usefulness.

10 A Generative Model? Two ways to tell the story, as two graphical models: start with the document, D → Z → W, with factors P(D), P(Z|D), P(W|Z); or start with the topic, D ← Z → W, with factors P(Z), P(D|Z), P(W|Z).

11 A Generative Model? The observed data are cells of the document-term matrix. We generate (doc, word) pairs. Random variables D, W and Z act as sources of objects. Either: draw a document, draw a topic from the document, draw a word from the topic; or draw a topic, draw a document from the topic, draw a word from the topic. The two models are statistically equivalent: they generate identical likelihoods when fit (proof by Bayesian inversion). In any case D and W are conditionally independent given Z. Both sampling stories are sketched below.
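
A minimal sampling sketch of the two equivalent stories, assuming tiny made-up distributions; both procedures draw (document, word) pairs from the same joint P(d,w) = Σ_z P(z) P(d|z) P(w|z).

```python
# Two equivalent sampling stories for PLSA (toy numbers for illustration).
import numpy as np

rng = np.random.default_rng(0)
P_z = np.array([0.6, 0.4])                       # P(Z)
P_d_given_z = np.array([[0.7, 0.3],              # rows: z, cols: d
                        [0.2, 0.8]])
P_w_given_z = np.array([[0.5, 0.4, 0.1],         # rows: z, cols: w
                        [0.1, 0.2, 0.7]])

def sample_topic_first():
    z = rng.choice(2, p=P_z)
    d = rng.choice(2, p=P_d_given_z[z])
    w = rng.choice(3, p=P_w_given_z[z])
    return d, w

def sample_document_first():
    # P(d) and P(z|d) follow from Bayesian inversion of the same joint
    P_d = P_z @ P_d_given_z
    d = rng.choice(2, p=P_d)
    P_z_given_d = P_z * P_d_given_z[:, d] / P_d[d]
    z = rng.choice(2, p=P_z_given_d)
    w = rng.choice(3, p=P_w_given_z[z])
    return d, w
```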

12 A Generative Model?

13 A Generative Model? But what is a Document here? Just a label! There are no attributes associated with documents. P(D|Z) relates topics to labels. A previously unseen document is just a new label. Therefore PLSA isn't generative in an interesting way, as it cannot handle previously unseen inputs in a generative manner. Though the P(Z) distribution may still be of interest.

14 Estimating the Parameters Θ = {P(Z); P(D|Z); P(W|Z)}. All distributions refer to the latent variable Z, so they cannot be estimated directly from the data. How do we know when we have the right parameters? When we have the θ that most closely generates the data, i.e. the document-term matrix.

15 Estimating the Parameters The joint P(D,W) generates the observed document-term matrix. The parameter vector θ yields the joint P(D,W). We want the θ that maximizes the probability of the observed data.

16 Estimating the Parameters For the multinomial distribution, the likelihood of the observed counts is L(θ) = Π_{d,w} P(d,w)^{n(d,w)}, i.e. log L(θ) = Σ_{d,w} n(d,w) log P(d,w), with P(d,w) = Σ_z P(z) P(d|z) P(w|z). Let X be the M×N document-term matrix.
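
As a sketch, the observed-data log likelihood above can be computed in a few lines (the array names and shapes are assumptions for illustration):

```python
# Sketch of the PLSA observed-data log likelihood:
#   log L(theta) = sum_{d,w} n(d,w) * log sum_z P(z) P(d|z) P(w|z)
import numpy as np

def log_likelihood(n, P_z, P_d_given_z, P_w_given_z):
    """n: (M, N) document-term counts; P_d_given_z: (K, M); P_w_given_z: (K, N)."""
    P_dw = np.einsum('k,kd,kw->dw', P_z, P_d_given_z, P_w_given_z)  # P(d, w)
    return float(np.sum(n * np.log(P_dw + 1e-12)))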

17 Estimating the Parameters Imagine we knew the complete M×N×K data matrix X, where the counts for topics were overt. Then the complete-data log likelihood would be Σ_{d,w,z} n(d,w,z) log [P(z) P(d|z) P(w|z)]. New and interesting: the unseen per-topic shares of each count must sum to 1 for a given d,w. The usual parameters θ remain {P(Z); P(D|Z); P(W|Z)}.

18 Estimating the Parameters We can factorize the complete counts in terms of the observed counts and a hidden distribution: n(d,w,z) = n(d,w) P(z|d,w). Let's give the hidden distribution its name: P(Z|D,W), the posterior distribution of Z w.r.t. D,W.

19 Estimating the Parameters P(Z|D,W) can be obtained from the parameters via Bayes and our core model assumption of conditional independence: P(z|d,w) = P(z) P(d|z) P(w|z) / Σ_z' P(z') P(d|z') P(w|z'). A small sketch follows below.
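
A small sketch of that posterior computation (array shapes are illustrative assumptions, matching the log-likelihood sketch above):

```python
# Posterior over topics for every (d, w) cell:
#   P(z|d,w) proportional to P(z) P(d|z) P(w|z), normalized over z.
import numpy as np

def posterior_z(P_z, P_d_given_z, P_w_given_z):
    """Returns an array of shape (K, M, N) holding P(z | d, w)."""
    joint = np.einsum('k,kd,kw->kdw', P_z, P_d_given_z, P_w_given_z)
    return joint / (joint.sum(axis=0, keepdims=True) + 1e-12)
```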

20 Estimating the Parameters Nobody said the generation of P(Z|D,W) must be based on the same parameter vector as the one we're looking for! Say we obtain P(Z|D,W) based on randomly generated parameters θ_n. Plugging that posterior into the complete-data likelihood, we get a function of the parameters: Q(θ).

21 Estimating the Parameters The resulting function, Q(θ), is the conditional expectation of the complete-data log likelihood with respect to the distribution P(Z|D,W). It turns out that if we find the parameters that maximize Q we get a better estimate of the parameters! Expressions for the parameters can be had by setting the partial derivatives with respect to the parameters to zero and solving, using Lagrange multipliers to enforce the sum-to-one constraints.

22 Estimating the Parameters E step (misnamed): compute the posterior P(z|d,w) ∝ P(z) P(d|z) P(w|z). M step: re-estimate the parameters from the expected counts, P(w|z) ∝ Σ_d n(d,w) P(z|d,w), P(d|z) ∝ Σ_w n(d,w) P(z|d,w), P(z) ∝ Σ_{d,w} n(d,w) P(z|d,w).

23 Estimating the Parameters Concretely, we generate (randomly) θ_1 = {P_θ1(Z); P_θ1(D|Z); P_θ1(W|Z)}. Compute the posterior P_θ1(Z|W,D). Compute new parameters θ_2. Repeat until convergence, say until the log likelihood stops changing much, or until boredom, or some N iterations. For stability, average over multiple starts, varying the number of topics. A compact EM loop is sketched below.
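
A compact EM loop for PLSA under the symmetric parameterization used above; the names, defaults, and random initialization are illustrative assumptions, not code from the talk.

```python
# Minimal PLSA EM sketch. n is the M x N document-term count matrix,
# K the number of topics.
import numpy as np

def plsa_em(n, K, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    M, N = n.shape
    P_z = np.full(K, 1.0 / K)
    P_d_given_z = rng.random((K, M)); P_d_given_z /= P_d_given_z.sum(1, keepdims=True)
    P_w_given_z = rng.random((K, N)); P_w_given_z /= P_w_given_z.sum(1, keepdims=True)
    for _ in range(iters):
        # E step: P(z|d,w) proportional to P(z) P(d|z) P(w|z)
        post = np.einsum('k,kd,kw->kdw', P_z, P_d_given_z, P_w_given_z)
        post /= post.sum(axis=0, keepdims=True) + 1e-12
        # M step: re-estimate parameters from expected counts n(d,w) P(z|d,w)
        exp_counts = post * n[None, :, :]                  # shape (K, M, N)
        P_d_given_z = exp_counts.sum(axis=2)               # sum over words
        P_d_given_z /= P_d_given_z.sum(axis=1, keepdims=True) + 1e-12
        P_w_given_z = exp_counts.sum(axis=1)               # sum over documents
        P_w_given_z /= P_w_given_z.sum(axis=1, keepdims=True) + 1e-12
        P_z = exp_counts.sum(axis=(1, 2))
        P_z /= P_z.sum()
    return P_z, P_d_given_z, P_w_given_z
```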

24 Folding In When a new document comes along, we want to estimate the posterior of the topics for the document. What is it about? I.e. what is the distribution over topics of the new document? Perform a little EM: E step: compute P(Z|W, D_new). M step: compute P(Z|D_new), keeping all other parameters unchanged. Converges very fast, five iterations? Overtly discriminative! The true colors of the method emerge. A folding-in sketch follows below.
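
A folding-in sketch under the same assumed representation as the EM loop above (P_z and P_w_given_z come from a fitted model, q is the new document's word-count vector); only the new document's topic mixture is updated, everything else stays frozen.

```python
# Fold a new document into a fitted PLSA model.
import numpy as np

def fold_in(q, P_z, P_w_given_z, iters=5):
    P_z_given_dnew = np.array(P_z, dtype=float).copy()     # start from P(Z)
    for _ in range(iters):
        # E step: P(z | d_new, w) proportional to P(z|d_new) P(w|z)
        post = P_z_given_dnew[:, None] * P_w_given_z        # shape (K, N)
        post /= post.sum(axis=0, keepdims=True) + 1e-12
        # M step: re-estimate only the new document's topic mixture
        P_z_given_dnew = (post * q[None, :]).sum(axis=1)
        P_z_given_dnew /= P_z_given_dnew.sum() + 1e-12
    return P_z_given_dnew
```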

25 Problems with PLSA Easily a huge number of parameters, which leads to unstable estimation (local maxima). Computationally intractable because of huge matrices. Modeling the documents directly can be a problem: what if the collection has millions of documents? Not properly generative (is this a problem?).

26 Examples of Applications Information Retrieval: compare topic distributions for documents and queries using a similarity measure like relative entropy. Collaborative Filtering (Hofmann, 2002) using Gaussian PLSA. Topic segmentation in texts, by looking for spikes in the distances between topic distributions for neighbouring text blocks. A relative-entropy comparison is sketched below.
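
A small sketch of comparing two topic distributions with relative entropy (KL divergence), as used in the retrieval and segmentation applications above; the toy vectors are assumptions.

```python
# Relative entropy (KL divergence) between two topic distributions.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps; p /= p.sum()
    q = np.asarray(q, float) + eps; q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

doc_topics   = [0.7, 0.2, 0.1]     # toy P(Z|document)
query_topics = [0.6, 0.3, 0.1]     # toy P(Z|query), e.g. from folding-in
print(kl_divergence(query_topics, doc_topics))
```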
