Graph-based Semi-Supervised Learning as Optimization


1 Graph-based Semi-Supervised Learning as Optimization
Partha Pratim Talukdar
CMU Machine Learning with Large Datasets (10-605)
April 3, 2012

2-5 Graph-based Semi-Supervised Learning
[Figure builds: a graph with a few labeled (seed) nodes and many unlabeled nodes; labels propagate from the seeds over the graph.]

6 Graph-based Semi-Supervised Learning
Various methods: LP (Zhu et al., 2003); QC (Bengio et al., 2007); Adsorption (Baluja et al., 2008)

7-12 Notations
Ŷ_vl : score of estimated label l on node v
Y_vl : score of seed label l on node v
R_vl : regularization target for label l on node v
S : seed node indicator (diagonal matrix)
W_uv : weight of edge (u, v) in the graph
[Figure: a node v annotated with its seed scores, label priors, and estimated scores.]
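To ground the notation, here is a minimal numpy sketch (a toy 4-node graph; all values are illustrative, not from the lecture):

```python
import numpy as np

n, m = 4, 2                    # 4 nodes, 2 labels (a dummy label would add one more column)

# W[u, v]: weight of edge (u, v); symmetric here for simplicity
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Y[v, l]: seed score of label l on node v (only nodes 0 and 3 are seeded)
Y = np.zeros((n, m))
Y[0, 0] = 1.0
Y[3, 1] = 1.0

# S: diagonal seed-node indicator
S = np.diag([1.0, 0.0, 0.0, 1.0])

# R[v, l]: per-node regularization targets; Yhat[v, l]: scores to be estimated
R = np.zeros((n, m))
Yhat = Y.copy()
```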

13-19 LP-ZGL (Zhu et al., ICML 2003)

$$\arg\min_{\hat{Y}} \; \sum_{l=1}^{m} \sum_{u,v} W_{uv}\,(\hat{Y}_{ul} - \hat{Y}_{vl})^2 \;=\; \sum_{l=1}^{m} \hat{Y}_l^{\top} L\, \hat{Y}_l \qquad \text{such that } \hat{Y}_{ul} = Y_{ul} \;\; \forall u : S_{uu} = 1$$

The objective enforces smoothness; the constraint matches seeds (hard).
Graph Laplacian: L = D − W (PSD)
Smoothness: two nodes connected by an edge with high weight should be assigned similar labels.
The solution satisfies the harmonic property.
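As a concrete illustration, a minimal sketch of the standard closed-form harmonic solution to this objective (assuming numpy and a boolean seed mask; not code from the lecture):

```python
import numpy as np

def lp_zgl(W, Y_seed, seed_mask):
    """Harmonic solution: clamp seed scores, solve for the rest.

    W:         (n, n) symmetric edge-weight matrix
    Y_seed:    (n, m) seed label scores (rows of non-seeds are ignored)
    seed_mask: (n,) boolean, True for seed nodes
    """
    D = np.diag(W.sum(axis=1))
    L = D - W                       # graph Laplacian, PSD
    u = ~seed_mask                  # unlabeled nodes
    s = seed_mask
    # Harmonic property in block form: L_uu Yhat_u = W_us Y_s
    Yhat = Y_seed.astype(float).copy()
    Yhat[u] = np.linalg.solve(L[np.ix_(u, u)], W[np.ix_(u, s)] @ Y_seed[s])
    return Yhat
```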

20-21 Two Related Views
Random Walk | Label Diffusion
[Figure: the same labeled/unlabeled graph viewed two ways: as a random walk from an unlabeled node, and as labels diffusing out of seed nodes.]

22-24 Random Walk View
A walk that started at node U is currently at node V. What next?
- Continue the walk with prob. p_v^cont
- Assign V's seed label to U with prob. p_v^inj
- Abandon the random walk and assign U a dummy label with prob. p_v^abnd
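A plain-Python simulation of this process (illustrative only; the names and the Monte-Carlo framing are mine, and the actual algorithms compute these label expectations in closed form rather than by sampling):

```python
import random

def adsorption_walk(U, neighbors, seed_label, p_cont, p_inj, max_steps=1000):
    """Simulate one Adsorption-style walk that started at node U.

    neighbors[v]  : list of (u, weight) pairs for v's neighbors
    seed_label[v] : v's seed label, or None for unlabeled nodes
    p_cont, p_inj : per-node continue/inject probabilities (p_inj is 0
                    for non-seed nodes); abandoning takes the remainder.
    """
    v = U
    for _ in range(max_steps):
        r = random.random()
        if r < p_inj[v]:                       # assign v's seed label to U
            return seed_label[v]
        if r < p_inj[v] + p_cont[v]:           # continue the walk
            nbrs, wts = zip(*neighbors[v])
            v = random.choices(nbrs, weights=wts)[0]
        else:                                  # abandon the walk
            break
    return "__DUMMY__"                         # dummy label for U
```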

25-28 Discounting Nodes
Certain nodes can be unreliable (e.g., high-degree nodes); do not allow propagation/walks through them.
Solution: increase the abandon probability on such nodes:
p_v^abnd ∝ degree(v)

29-32 Redefining Matrices
W′_uv = p_u^cont × W_uv (new edge weight)
S_uu = p_u^inj
R_v = p_v^abnd for the dummy label, and 0 for non-dummy labels
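A minimal numpy sketch of these redefinitions. Note the probability heuristic below is only illustrative (it makes p^abnd grow with degree, as on the previous slide); the Adsorption/MAD papers derive these probabilities from entropy-based heuristics:

```python
import numpy as np

def redefine_matrices(W, seed_mask, beta=2.0):
    """Build W', S, R from per-node walk probabilities (illustrative heuristic)."""
    n = W.shape[0]
    deg = W.sum(axis=1)
    p_abnd = deg / (beta + deg)                           # grows with degree(v)
    p_inj = np.where(seed_mask, 0.5 * (1 - p_abnd), 0.0)  # only seeds inject
    p_cont = 1.0 - p_abnd - p_inj                         # remainder continues

    W_new = p_cont[:, None] * W              # W'_uv = p_u^cont * W_uv
    S = np.diag(p_inj)                       # S_uu = p_u^inj
    R = np.zeros((n, 1))                     # one column shown: the dummy label
    R[:, 0] = p_abnd                         # R_v = p_v^abnd on the dummy label
    return W_new, S, R
```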

33-39 Modified Adsorption (MAD) [Talukdar and Crammer, ECML 2009]

$$\arg\min_{\hat{Y}} \; \sum_{l=1}^{m+1} \Big[ \; \|S\hat{Y}_l - SY_l\|^2 \;+\; \mu_1 \sum_{u,v} M_{uv}\,(\hat{Y}_{ul} - \hat{Y}_{vl})^2 \;+\; \mu_2\, \|\hat{Y}_l - R_l\|^2 \; \Big]$$

The three terms: match seeds (soft); smooth; match priors (regularizer).
m labels, +1 dummy label (used to indicate none-of-the-above)
M = W + Wᵀ is the symmetrized weight matrix
Ŷ_vl : weight of label l on node v
Y_vl : seed weight for label l on node v
S : diagonal matrix, nonzero only for seed nodes
R_vl : regularization target for label l on node v
MAD adds extra regularization compared to LP-ZGL (Zhu et al., ICML 2003); it is similar to QC (Bengio et al., 2006).

40-41 MAD (contd.)

Inputs: Y, R : |V| × (|L|+1); W : |V| × |V|; S : |V| × |V| diagonal

    Ŷ ← Y
    M ← W + Wᵀ
    Z_v ← S_vv + μ₁ Σ_{u ≠ v} M_vu + μ₂, for all v ∈ V
    repeat
        for all v ∈ V do
            Ŷ_v ← (1 / Z_v) [ (SY)_v + μ₁ M_v Ŷ + μ₂ R_v ]
        end for
    until convergence

- Extends Adsorption with a well-defined optimization
- Importance of a node can be discounted
- Easily parallelizable: scalable
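A minimal numpy sketch of this iterative update (illustrative; shapes follow the Notations slide, S is passed as its diagonal, and convergence is checked with a simple tolerance):

```python
import numpy as np

def mad(Y, R, W, S_diag, mu1=1.0, mu2=0.01, tol=1e-6, max_iters=1000):
    """Jacobi-style MAD updates.  Y, R: (n, m+1); W: (n, n); S_diag: (n,)."""
    M = W + W.T
    np.fill_diagonal(M, 0.0)                      # sum runs over u != v
    Z = S_diag + mu1 * M.sum(axis=1) + mu2        # per-node normalizer Z_v
    Yhat = Y.astype(float).copy()
    SY = S_diag[:, None] * Y                      # (S Y)_v
    for _ in range(max_iters):
        Y_new = (SY + mu1 * (M @ Yhat) + mu2 * R) / Z[:, None]
        if np.abs(Y_new - Yhat).max() < tol:
            return Y_new
        Yhat = Y_new
    return Yhat
```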

42-45 MapReduce Implementation of MAD
Map: each node sends its current label assignments to its neighbors.
Reduce: each node updates its own label assignment using the messages received from its neighbors, plus its own information (e.g., seed labels, regularization penalties).
Repeat until convergence.
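One MAD iteration phrased in map/reduce terms, as a plain-Python simulation (illustrative; SY, Z, mu1, mu2 are precomputed as in the update sketch above, and a real Hadoop job would emit and group key-value pairs the same way):

```python
from collections import defaultdict
import numpy as np

def map_phase(M, Yhat):
    """Map: each node v sends its current label scores to every neighbor u."""
    messages = defaultdict(list)
    n = M.shape[0]
    for v in range(n):
        for u in range(n):
            if u != v and M[v, u] > 0:
                messages[u].append((M[v, u], Yhat[v]))
    return messages

def reduce_phase(messages, SY, R, Z, mu1, mu2):
    """Reduce: each node combines neighbor messages with its own seed/prior info."""
    Y_new = np.zeros_like(SY)
    for v in range(SY.shape[0]):
        nbr_sum = sum(w * y for w, y in messages.get(v, []))
        Y_new[v] = (SY[v] + mu1 * nbr_sum + mu2 * R[v]) / Z[v]
    return Y_new
```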

46 Experiments

47-48 Text Classification
[Chart: PRBEP (macro-averaged) on the WebKB dataset, 3148 test instances.]

49-50 Sentiment Classification
[Chart: precision on 3568 sentiment test instances.]

51-53 Class-Instance Acquisition
Graph with 303k nodes, 2.3m edges.
[Chart: Mean Reciprocal Rank (MRR) of LP-ZGL, Adsorption, and MAD vs. amount of supervision, on the Freebase-2 graph with 192 WordNet classes.]

54 New (Class, Instance) Pairs Found

Class               | A few non-seed instances found by Adsorption
--------------------|---------------------------------------------
Scientific Journals | Journal of Physics; Nature Structural and Molecular Biology; Sciences Sociales et Sante; Kidney and Blood Pressure Research; American Journal of Physiology-Cell Physiology; ...
NFL Players         | Tony Gonzales; Thabiti Davis; Taylor Stubblefield; Ron Dixon; Rodney Hannan; ...
Book Publishers     | Small Night Shade Books; House of Ansari Press; Highwater Books; Distributed Art Publishers; Cooper Canyon Press; ...

Total classes: 9081

55-57 When is MAD most effective?
[Chart: relative increase in MRR by MAD over LP-ZGL vs. average degree of the graph.]
MAD seems to be more effective in graphs with high average degree, where there is greater need for regularization.

58 Benefits of Having an Optimization Framework

59-61 Extension to Dependent Labels
Labels are not always mutually exclusive.
[Figure: a label graph over beer types — e.g., White, ScotchAle, BrownAle, PaleAle, Porter, Ale, TopFermentedBeer — with label-similarity edge weights such as 0.95 and 0.8.]

62-67 MAD with Dependent Labels (MADDL)

MADDL Objective:
min  Seed Label Loss (if any)  +  Edge Smoothness Loss  +  Label Prior Loss (e.g., prior on dummy label)  +  Dependent Label Loss

Dependent Label Loss: penalize if similar labels (e.g., BrownAle and Ale, with similarity 1.0) are assigned different scores on a node.
The MADDL objective results in a scalable iterative update, with a convergence guarantee.
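One way to write the extra term — a sketch, assuming a label-similarity matrix C_{ll'} like the one in the label graph above (the exact weighting in the MADDL paper may differ):

```latex
% Dependent Label Loss: similar labels should get similar scores on each node
\mu_3 \sum_{v \in V} \sum_{l, l'} C_{l l'} \left( \hat{Y}_{vl} - \hat{Y}_{vl'} \right)^2
```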

68-74 Smooth Sentiment Ranking
[Figure: star-rating predictions from rank 1 to rank 4; smooth predictions (mass on adjacent ranks) are preferred over non-smooth ones.]
MADDL label constraints: adjacent rating labels are connected with similarity 1.0.
[Tables: counts of the top predicted label pairs in MAD output vs. MADDL output.]
MADDL generates a smoother ranking, while preserving prediction accuracy.

75-81 Other Methods and Graph Construction
Measure Propagation [Subramanya and Bilmes, NIPS 2009]
- KL-divergence-based loss
- Results over a 120m-node graph
Chapter 21 of the SSL book [Chapelle et al.]
Graph Construction
- Task-specific graph construction using metric learning [Dhillon, Talukdar, and Crammer, ACL 2010]

82-87 Graph-based SSL: Final Thoughts
- Provides a flexible representation for both relational and IID data
- Scalable: MR/Hadoop-based implementation
- Can handle labeled as well as unlabeled data
- Can handle multiple classes
- Effective in practice

89-90 Code in the Junto Label Propagation Toolkit (includes a Hadoop-based implementation)
Thanks!
