Social Interactions: A First-Person Perspective.


1 Social Interactions: A First-Person Perspective. A. Fathi, J. Hodgins, J. Rehg. Presented by Jacob Menashe, November 16, 2012.

2-5 Social Interaction Detection
Objective: detect social interactions from video footage.
- Consider faces and attention
- Account for temporal context
- Analyze first-person movement cues

6 Introduction, Overview, Features, Temporal Context, Experiments

7 Video Example
[Video frames with color-coded activity labels: Red = Dialogue, Yellow = Walking Dialogue, Green = Discussion, Light Blue = Walking Discussion, Dark Blue = Monologue, None = Background]

8-15 Features
Features are constructed based on first- and third-person information:
1. Dense optical flow (first-person movement)
2. Face locations (relative to the first person)
3. Attention and roles. For each person x:
   - Faces looking at x
   - Whether the first person looks at x
   - Mutual attention between x and the first person
   - Number of faces looking at where x is looking
(A feature-assembly sketch follows the Feature Example slide below.)

16 Feature Example
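As an illustration of how the attention/role features above might be assembled per person, here is a minimal Python sketch. The Person container, the role_features helper, and the convention that the camera wearer has pid -1 are hypothetical assumptions, not the authors' code.

```python
# Illustrative sketch (not the authors' implementation): assembling the
# per-person attention/role features listed above. Face detections, gaze
# targets, and the first-person gaze estimate are assumed to be given.
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Person:
    pid: int
    location: tuple            # (x, y) face location relative to the camera wearer
    looking_at: Optional[int]  # pid of the person this face appears to look at, or None

def role_features(person: Person, others: List[Person],
                  first_person_target: Optional[int]) -> list:
    """Build the attention/role feature vector for one person x."""
    # 1. How many detected faces are looking at x?
    faces_looking_at_x = sum(1 for o in others if o.looking_at == person.pid)
    # 2. Is the first person (camera wearer) looking at x?
    first_looks_at_x = int(first_person_target == person.pid)
    # 3. Mutual attention: x looks back at the wearer (pid -1) while being looked at.
    mutual = int(first_looks_at_x and person.looking_at == -1)
    # 4. How many faces share x's focus of attention?
    shared_focus = sum(1 for o in others
                       if person.looking_at is not None and o.looking_at == person.looking_at)
    return [faces_looking_at_x, first_looks_at_x, mutual, shared_focus]

# Example: two bystanders both look at person 3, who looks back at the wearer.
others = [Person(1, (120, 80), 3), Person(2, (300, 90), 3)]
x = Person(3, (210, 60), -1)
print(role_features(x, others, first_person_target=3))  # -> [2, 1, 1, 0]
```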

17-21 Conditional Random Fields
CRFs are described in Lafferty et al. [2001]. Observations x_1, x_2, x_3 and labels y_1, y_2, y_3 form a Markov chain, and each label node depends on its neighboring labels and its observation:
  p(y_1 | x_1, y_2),  p(y_2 | y_1, y_3, x_2),  p(y_3 | y_2, x_3)
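For reference, the linear-chain CRF of Lafferty et al. [2001] defines the conditional distribution below. This is the standard textbook form; the feature functions f_k and weights lambda_k are generic, not the specific features used in this work.

```latex
% Linear-chain CRF: conditional probability of a label sequence y given observations x
p_\theta(\mathbf{y} \mid \mathbf{x})
  = \frac{1}{Z(\mathbf{x})}
    \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y_{t-1}, y_t, \mathbf{x}, t) \Big),
\qquad
Z(\mathbf{x}) = \sum_{\mathbf{y}'} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} \lambda_k \, f_k(y'_{t-1}, y'_t, \mathbf{x}, t) \Big).
```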

22-25 Hidden Conditional Random Fields
A micro view of the HCRF model as described in Quattoni et al. [2007]: Y is a label for the whole sequence, x_i is a single observation in the sequence, and each h_i is a possible hidden state for that observation.

26-33 Hidden Conditional Random Fields (cont.)
A macro view of the HCRF model as described in Quattoni et al. [2007]: Y is a label for the whole sequence, each x_i is a single observation in the sequence, and each h_i is the hidden state label assigned to x_i. The hidden chain factors as
  p(h_1 | Y, h_2, x_1),  p(h_2 | Y, h_1, h_3, x_2),  p(h_3 | Y, h_2, x_3),
and the sequence label satisfies p(Y | {h_i}) = p(Y | {x_i}).
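The identity p(Y | {h_i}) = p(Y | {x_i}) corresponds to marginalizing over the hidden chain. In the usual HCRF form (following Quattoni et al. [2007], with a generic potential Psi):

```latex
% HCRF: the sequence-label posterior marginalizes over all hidden-state assignments h
p_\theta(Y \mid \mathbf{x})
  = \sum_{\mathbf{h}} p_\theta(Y, \mathbf{h} \mid \mathbf{x})
  = \frac{\sum_{\mathbf{h}} \exp\big( \Psi(Y, \mathbf{h}, \mathbf{x}; \theta) \big)}
         {\sum_{Y'} \sum_{\mathbf{h}} \exp\big( \Psi(Y', \mathbf{h}, \mathbf{x}; \theta) \big)}.
```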

34-45 HCRF Example
Suppose we want to find the likelihood of walking dialogue (WDlg) vs. walking discussion (WDisc).
- Each x_i is now a feature extracted from video frames.
- Each h_i is determined from training, e.g.:
  h_1: John wants to hear about my weekend.
  h_2: I'm feeling talkative.
  h_3: Mary wants to listen to her iPod.
Marginalizing the hidden states gives p(WDlg | {h_i}) = p(WDlg | {x_i}) and p(WDisc | {h_i}) = p(WDisc | {x_i}). If p(WDlg) > p(WDisc), assign Y = WDlg.
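To make the decision rule concrete, here is a self-contained brute-force sketch of an HCRF label posterior on a toy chain. The parameter tensors (theta_obs, theta_lab, theta_trans) and their random values are illustrative assumptions, not trained weights or the authors' exact parameterization.

```python
# Minimal numeric sketch of the HCRF decision rule above: marginalize over all
# hidden-state assignments h and pick the sequence label Y with the larger posterior.
import itertools
import numpy as np

rng = np.random.default_rng(0)
labels = ["WDlg", "WDisc"]
K, T, d = 3, 3, 4                      # hidden states, sequence length, feature dim
x = rng.normal(size=(T, d))            # per-frame features x_1..x_T

theta_obs = rng.normal(size=(K, d))                  # hidden state vs. observation
theta_lab = rng.normal(size=(len(labels), K))        # sequence label vs. hidden state
theta_trans = rng.normal(size=(len(labels), K, K))   # label-dependent transitions

def score(y_idx, h):
    """Joint potential Psi(Y, h, x) for one hidden assignment h."""
    s = sum(theta_obs[h[t]] @ x[t] + theta_lab[y_idx, h[t]] for t in range(T))
    s += sum(theta_trans[y_idx, h[t - 1], h[t]] for t in range(1, T))
    return s

# p(Y | x) is proportional to sum_h exp(Psi(Y, h, x)); brute force over K^T assignments.
unnorm = np.array([
    sum(np.exp(score(y, h)) for h in itertools.product(range(K), repeat=T))
    for y in range(len(labels))
])
posterior = unnorm / unnorm.sum()
print(dict(zip(labels, posterior.round(3))))
print("assign Y =", labels[int(np.argmax(posterior))])
```

In practice the sum over hidden assignments is computed with belief propagation on the chain rather than enumeration; the brute force here is only to show what is being marginalized.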

46 Introduction, Overview, Temporal Context (Conditional Random Fields, Hidden Conditional Random Fields, HCRF Example), Experiments

47 Introduction, Overview, Temporal Context, Experiments (Experiment Outline, Experiment 1: Video Processing, Experiment 2: Caltech Dataset, Conclusion)

48-56 Experiment Outline
The following experiments are presented:
- Video Processing
- Caltech image dataset
Adjusted parameters:
- Iterations
- Hidden States
- Optimization Function
- Clusters
Results are compared with a linear SVM baseline (a scoring sketch follows below).
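As a hedged illustration of how such a comparison against the linear SVM baseline could be scored, the sketch below computes ROC/AUC for a linear SVM and a placeholder set of HCRF scores on synthetic data. None of the data, scores, or parameters come from the experiments reported here.

```python
# Sketch: score a linear SVM baseline against (placeholder) HCRF posteriors with ROC/AUC.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 20))                              # synthetic stand-in features
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=400) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

svm = LinearSVC(C=1.0).fit(X_tr, y_tr)
svm_scores = svm.decision_function(X_te)                    # signed distance to the margin

# Placeholder for per-sequence HCRF posteriors p(Y = 1 | x); here just noisy ground truth.
hcrf_scores = y_te + 0.8 * rng.normal(size=y_te.shape)

for name, scores in [("linear SVM", svm_scores), ("HCRF (placeholder)", hcrf_scores)]:
    fpr, tpr, _ = roc_curve(y_te, scores)
    print(f"{name}: AUC = {auc(fpr, tpr):.3f}")
```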

57-58 Experiment 1: Video Processing
                 Mine                       Theirs
Training:        40 intervals               4,000 intervals
Testing:         40 intervals               [unspecified]
Classification:  Dialogue vs. Discussion    One vs. All
Features:        All Features               Location, First-Person Motion, Attention, All Features
Scale: ~42 hours of video = 11,340 intervals; at ... hours per 20 intervals, processing the full set would take > 18 months.

59 Experiment 1: Video Processing (cont.)
[ROC curves (true positive rate vs. false positive rate) for HCRF Dialogue vs. Discussion detection: my results alongside their results]

60-68 Experiment 2: Caltech Dataset
Experiment 2 focuses on the Caltech image dataset.
- Multi-class HCRF evaluated
- Classes are evaluated in isolation
- Temporal context is simulated with clustering
Initial parameters are based on Fathi et al. [2012]:
- Hidden States: 5
- Window Size: 5
- Max Iterations: 100
- Optimizer: Broyden-Fletcher-Goldfarb-Shanno (BFGS)
(An optimizer-setup sketch follows below.)
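The sketch below shows what a BFGS run with a 100-iteration cap might look like using scipy.optimize.minimize. The objective is a toy logistic negative log-likelihood standing in for the real HCRF training objective; all names and data are illustrative assumptions.

```python
# Sketch: BFGS with a 100-iteration cap minimizing a stand-in regularized objective.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)

def neg_log_likelihood(theta, reg=1.0):
    """Toy logistic NLL standing in for the HCRF training objective."""
    z = X @ theta
    nll = np.sum(np.logaddexp(0.0, z) - y * z)
    return nll + 0.5 * reg * theta @ theta

result = minimize(
    neg_log_likelihood,
    x0=np.zeros(X.shape[1]),
    method="BFGS",                 # Broyden-Fletcher-Goldfarb-Shanno
    options={"maxiter": 100},      # matches the 100-iteration cap above
)
print(result.success, result.nit, result.fun)
```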

69-81 Experiment 2 Runs (Exps. 2a-2j)
Each run below produced per-class ROC curves (true positive rate vs. false positive rate) for airplanes, cars, faces, and motorbikes; Exp. 2a also overlays the SVM (all-classes) baseline, and Exps. 2a, 2g, and 2j additionally show the rank 1-4 example images per class.

Exp.  Configuration                            Processing time       Memory
2a    HCRF, initial settings                   ~18 minutes           1 MB
2b    HCRF, 10 iterations                      ~3 minutes            1 MB
2c    HCRF, 1 hidden state                     ~2 minutes            1 MB
2d    HCRF, conjugate gradient optimizer       ~11 minutes           1 MB
2e    HCRF, 1000 iterations                    ~30 minutes           1 MB
2f    HCRF, 15 hidden states                   ~1 hour               3 GB
2g    HCRF, clustering + 15 hidden states      ~1 hour 10 minutes    3 GB
2h    HCRF, clustering + 20 hidden states      ~1 hour 40 minutes    5 GB
2i    LDCRF, clustering + 20 hidden states     ~5 hours 20 minutes   5 GB
2j    CRF, clustering, initial parameters      ~21 seconds           1 MB

82-84 Overall Results
- SVM, CRF, and LDCRF perform best.
- CRF almost outperforms all of them with negligible memory and processing requirements.
- Hidden states increase accuracy, but at significant memory cost.

85 Conclusion HCRF is accurate, but it has a heavy performance cost; it may still be optimal for particular domains.

86 References I
Alireza Fathi, Jessica K. Hodgins, and James M. Rehg. Social interactions: A first-person perspective. In CVPR. IEEE, 2012.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML '01). Morgan Kaufmann, San Francisco, CA, USA, 2001.

87 References II
Ariadna Quattoni, Sybor Wang, Louis-Philippe Morency, Michael Collins, and Trevor Darrell. Hidden conditional random fields. IEEE Trans. Pattern Anal. Mach. Intell., 29(10), October 2007.
