Author-Specific Sentiment Aggregation for Rating Prediction of Reviews

Author-Specific Sentiment Aggregation for Rating Prediction of Reviews. Subhabrata Mukherjee (Max Planck Institute for Informatics), Sachindra Joshi (IBM India Research Lab). LREC 2014

Outline: Motivation; Phrase Annotated Author-Specific Sentiment Ontology Tree; Learning; Experimental Evaluation; Use Case: Thwarting Detection; Conclusions

Objective: classify a piece of text as positive, negative, or objective. Example: "I saw a movie. The direction was awesome. However, the acting was not that good."

IMDB Review: "I bought a Canon EOS 7D (DSLR). It's very small, sturdy, and constructed well. The handling is quite nice with a powder-coated metal frame. It powers on quickly and the menus are fairly easy to navigate. The video modes are nice, too. It works great with my 8GB Eye-Fi SD card. A new camera isn't worth it if it doesn't exceed the picture quality of my old 5Mpixel SD400 and this one doesn't. The auto white balance is poor. I'd need to properly balance every picture taken so far with the ELPH 300. With 12 Mpixels, you'd expect pretty good images, but the problem is that the ELPH 300 compression is turned up so high that the sensor's acuity gets lost (softened) in compression." Facet annotations: positive - camera size, structure, ease of use, video modes, SD support; negative - auto white balance, high compression leading to loss of sensor acuity. These facet polarities combine into the overall review polarity.

Camera Sentiment Ontology Tree. An ontology is a knowledge base: a structured list of concepts, relations, and individuals. Hierarchical relationships between product attributes are captured. The overall polarity is negative, as facet polarities higher up the tree dominate those at lower levels.

Why Author Specificity? ["This film is based on a true-life incident. It sounds like a great plot and the director makes a decent attempt in narrating a powerful story."] ["However, the film does not quite make the mark due to sloppy acting."] Rating varies for authors with different topic preferences: positive for those who prefer the plot and narration, negative for those who weight acting. Affective sentiment also varies across authors: how negative is "does not quite make the mark" for me? Author writing style helps associate facets with sentiments [1], e.g., topic switches, use of content and function words, etc.; here the author makes a topic switch with the function word "however". Traditional works ignore author identity. [1] Joint Author Sentiment Topic Model, Mukherjee et al., SIAM Data Mining (SDM) 2014

Review as with any gen-x mtv movie (like last year's dead man on campus), the movie is marketed for a primarily male audience as indicated by its main selling points: sex and football. those two items are sure to snare a sizeable box office chunk initially, but sales will decline for two reasons. first, the football sequences are nothing new; the sports genre isn't mainstream and it's been retread to death. second, the sex is just bad. despite the appearance of a whipped cream bikini or the all-night strip-club party, there's nothing even remotely tantalizing. the acting is mostly mediocre, not including the fantastic jon voight. cultivating his usual sliminess, voight gives an unexpectedly standout performance as west canaan coyotes head coach bud kilmer... these elements (as well as the heavy drinking and carousing) might be more appropriate on a college campus - but mtv's core audience is the high school demographic. this focus is further emphasized by the casting: james van der beek, of tv's dawson's creek, is an understandable choice for the reluctant hero...

Cinema Ontology Tree. Source: JedFilm (2014), Cinema Ontology Project, March.

Phrase Annotated Sentiment Ontology Tree. Concept mapping from the review to the SOT using the Wu-Palmer WordNet similarity measure. Facet-specific opinion extraction using dependency parsing (Mukherjee et al., CICLING 2012).
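The Wu-Palmer measure used for this concept mapping can be sketched over a toy taxonomy; it scores two concepts by how deep their least common subsumer (LCS) sits: wup(a, b) = 2 * depth(LCS) / (depth(a) + depth(b)). The facet names and tree below are illustrative, not the paper's ontology or WordNet.

```python
# Toy taxonomy (illustrative, not the paper's ontology): child -> parent.
PARENT = {
    "movie": None,
    "crew": "movie", "plot": "movie",
    "actor": "crew", "director": "crew",
}

def path_to_root(node):
    """Return [node, parent, ..., root]."""
    path = []
    while node is not None:
        path.append(node)
        node = PARENT[node]
    return path

def depth(node):
    return len(path_to_root(node))  # root has depth 1

def wu_palmer(a, b):
    ancestors_a = set(path_to_root(a))
    # Walk up from b until we hit a shared ancestor: the LCS.
    lcs = next(n for n in path_to_root(b) if n in ancestors_a)
    return 2.0 * depth(lcs) / (depth(a) + depth(b))

print(wu_palmer("actor", "director"))  # LCS = crew: 2*2/(3+3)
print(wu_palmer("actor", "plot"))      # LCS = movie: 2*1/(3+2)
```

In practice one would call `wup_similarity` on WordNet synsets (e.g. via NLTK) rather than a hand-built tree; the recursion above only makes the definition concrete.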

Phrase Annotated Author-Specific SOT
- $T(V, E)$: the SOT, where $V$ is the product attribute set and $E_{j,k}$ is the attribute relation connecting $V_j$ and $V_k$
- $V_j = \langle f_j, \langle p_j^i \rangle, w_j, d_j \rangle$
- $f_j$: product facet
- $p_j^i$: phrases in the review carrying the author's opinion about $f_j$
- $w_j$: author preference for $f_j$
- $d_j$: depth of $f_j$ in the SOT
- $O(p)$: sentiment predictor function that maps $\mathrm{polarity}(p_j^i)$ to $[-1, 1]$
- PASOT: each author $a$ is equipped with $(T^a(V, E), O^a(p))$

Expected Sentiment Weight. The expected sentiment weight (ESW) of a node in the PASOT is defined as

$$ESW^a(V_j) = \underbrace{\overbrace{w_j^a}^{\text{author preference}} \cdot \overbrace{\tfrac{1}{d_j}}^{\text{facet depth}} \cdot \sum_i \overbrace{O^a(p_j^i)}^{\text{phrase polarity}}}_{\text{self-information}} + \underbrace{\sum_k ESW^a(V_{j,k})}_{\text{children information}} \qquad (1)$$

where $O^a(p_j^i) \in [-1, 1]$. The review polarity is given by $ESW^a(\mathrm{root})$. Computation of the ESW requires learning $\langle w_j^a \rangle$ and $O^a$ for each author $a$.
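Equation 1 can be evaluated with a simple bottom-up recursion over the tree. The tree shape, author preferences, depths, and phrase polarities below are illustrative values, not learnt from data:

```python
# Bottom-up ESW computation (Equation 1):
# ESW(V_j) = w_j * (1/d_j) * sum_i O(p_j^i)  +  sum_k ESW(V_j,k).

class Node:
    def __init__(self, w, depth, polarities, children=()):
        self.w = w                    # author preference w_j^a
        self.depth = depth            # d_j, depth of the facet in the SOT
        self.polarities = polarities  # O^a(p_j^i) for the extracted phrases
        self.children = list(children)

def esw(node):
    self_info = node.w * (1.0 / node.depth) * sum(node.polarities)
    return self_info + sum(esw(c) for c in node.children)

acting = Node(w=0.9, depth=2, polarities=[-0.8])     # strongly weighted, negative
plot = Node(w=0.4, depth=2, polarities=[0.7, 0.5])   # weakly weighted, positive
root = Node(w=1.0, depth=1, polarities=[], children=[acting, plot])

score = esw(root)  # review polarity = sign of ESW at the root
print(score, "positive" if score > 0 else "negative")
```

Note how the author's stronger preference for acting (w = 0.9) lets one negative phrase outweigh two positive plot phrases, which is exactly the author-specific effect the model is after.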

Review Polarity. For each author $a$, every facet $f_j$ is associated with $ESW^a(V_j)$, where $f_j \in V_j$, computed using Equation 1. Let $y_i$ be the overall polarity of review $i$. To learn the overall review polarity as a function of author-specific facet preferences, formulate an $L_2$-regularized logistic regression problem:

$$\min_{w^a} \; \frac{1}{2} (w^a)^T w^a + C \sum_i \log\Big(1 + \exp\big(-y_i \sum_j w_j^a \, ESW^a(V_j)\big)\Big) \qquad (2)$$

A trust-region Newton method (Lin et al., JMLR 2008) is used to learn the weights in the above equation.
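A minimal sketch of objective (2), fit by plain gradient descent on synthetic stand-ins for the ESW features (the paper uses the trust-region Newton solver of Lin et al., JMLR 2008, not this loop):

```python
import numpy as np

# Objective (2): min_w 0.5 * w.T @ w + C * sum_i log(1 + exp(-y_i * w.T @ x_i)),
# where x_i holds the per-facet ESW values of review i and y_i is +1/-1.

def fit_l2_logreg(X, y, C=1.0, lr=0.01, iters=3000):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        margins = y * (X @ w)
        sigma = 1.0 / (1.0 + np.exp(margins))  # = -d/dm log(1 + exp(-m))
        grad = w - C * X.T @ (y * sigma)       # regularizer + loss gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))         # synthetic stand-ins for ESW^a(V_j)
true_w = np.array([2.0, -1.0, 0.5])   # hypothetical facet preferences
y = np.sign(X @ true_w)               # overall review polarities y_i
w = fit_l2_logreg(X, y)
train_acc = np.mean(np.sign(X @ w) == y)
print(w, train_acc)
```

The learnt weights recover the signs of the generating preferences, i.e. which facets the synthetic author weighs positively or negatively.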

Sentiment Predictor Function O(p): an $L_2$-regularized, $L_2$-loss Support Vector Machine with bag-of-words unigram features, trained over the movie review corpus of (Maas et al., ACL 2011).
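A sketch of such a predictor: a linear model over bag-of-words unigrams trained by subgradient descent on the L2 (squared) hinge loss. The four-sentence corpus is illustrative, not the Maas et al. data, and squashing the decision value into [-1, 1] with tanh is an assumption of this sketch, since the slide does not specify the mapping.

```python
import math

train = [("great plot superb acting", +1),
         ("awesome direction great cast", +1),
         ("sloppy acting bad plot", -1),
         ("boring and bad direction", -1)]

vocab = sorted({w for text, _ in train for w in text.split()})
idx = {w: i for i, w in enumerate(vocab)}

def bow(text):
    v = [0.0] * len(vocab)
    for w in text.split():
        if w in idx:
            v[idx[w]] += 1.0
    return v

def train_svm(data, lam=0.01, lr=0.1, epochs=200):
    w = [0.0] * len(vocab)
    for _ in range(epochs):
        for text, y in data:
            x = bow(text)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            for i in range(len(w)):
                grad = lam * w[i]           # L2 regularizer
                if margin < 1:              # squared-hinge subgradient
                    grad -= 2 * (1 - margin) * y * x[i]
                w[i] -= lr * grad
    return w

w = train_svm(train)

def O(phrase):
    """Map a phrase's decision value into [-1, 1] via tanh squashing."""
    score = sum(w[idx[t]] for t in phrase.split() if t in idx)
    return math.tanh(score)

print(O("great acting"), O("sloppy plot"))
```

Words seen only in positive reviews ("great") pull phrases toward +1, words seen only in negative reviews ("sloppy") toward -1, while ambiguous words ("acting", "plot") stay near zero.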

Author-Specific Hierarchical Sentiment Aggregation
- Learn the domain-specific ontology $T(V, E)$ using a knowledge base
- Learn the phrase polarity predictor $O(p)$ using an $L_2$-regularized, $L_2$-loss SVM
- For each author:
  - Map each review to $T(V, E)$ using Wu-Palmer similarity
  - Use the dependency parsing algorithm (Mukherjee et al., CICLING 2012) to extract facet-specific opinions $\langle p_j^i \rangle$
  - Construct the PASOT for each review using $O(p_j^i)$
  - Apply Equation 1 to the PASOT bottom-up to find $ESW(V_j)\ \forall j$
  - Using 80% labeled reviews $y_i$ and $\langle ESW^a(V_j) \rangle$, learn author-specific facet weights $\langle w_j^a \rangle$ using Equation 2
- For each unseen review:
  - Construct the PASOT using the above steps and the learnt $w^a$
  - Use Equation 1 to find $\langle ESW^a(V_j) \rangle$
  - The review polarity is given by $\mathrm{Sign}(ESW^a(\mathrm{root}))$

Learnt PASOT for Example Review

Dataset
Dataset                   Authors  Avg Rev/Author  Pos   Neg   Total  Avg Rev Length  Avg Words/Rev
Movie Review (original)   312      7               1000  1000  2000   32              746
Movie Review (processed)  65       23              705   762   1467   32              711
Table: Movie Review Dataset Statistics (original vs. processed data)

Baselines
- Support Vector Machine with $L_2$ loss, $L_2$ regularization, and unigram bag-of-words features (Pang and Lee, EMNLP 2002; Pang and Lee, ACL 2004; Mullen and Collier, EMNLP 2004)
- Author-specific facet preference using regression (Mukherjee et al., WWW 2013)
- Sentiment aggregation using the ConceptNet ontology (Mukherjee and Joshi, IJCNLP 2013)

Accuracy Comparison with Baselines
Model                                                                   Author Acc.  Overall Acc.
Bag-of-words SVM (Pang and Lee, EMNLP 2002; Pang and Lee, ACL 2004;
  Mullen and Collier, EMNLP 2004)                                       80.23        78.49
Author-Specific Analysis using Regression (Mukherjee et al., WWW 2013)  79.31        79.07
Ontological Sentiment Aggregation (Mukherjee and Joshi, IJCNLP 2013)    81.4         79.51
PASOT                                                                   86.32        86.04

Comparison of Existing Models with PASOT on the IMDB Dataset
Models                                                                   Acc.
Eigenvector Clustering (Dasgupta et al., EMNLP 2009)                     70.9
Semi-supervised, 40% doc. labels (Li et al., IJCNLP 2009)                73.5
LSM, unsupervised with prior info (Lin et al., CIKM 2009)                74.1
SO-CAL, full lexicon (Taboada et al., Comp. Ling. 2011)                  76.37
RAE: semi-supervised recursive auto-encoders, random word
  initialization (Socher et al., EMNLP 2011)                             76.8
WikiSent: extractive summarization with Wikipedia + lexicon
  (Mukherjee et al., ECML-PKDD 2012)                                     76.85
Supervised Tree-CRF (Socher et al., EMNLP 2011)                          77.3
RAE: supervised recursive auto-encoders, 10% cross-validation
  (Socher et al., EMNLP 2011)                                            77.7
JST without subjectivity detection, using LDA (Lin et al., CIKM 2009)    82.8
Supervised SVM (Pang et al., EMNLP 2002)                                 82.9
JST with subjectivity detection (Lin et al., CIKM 2009)                  84.6
PASOT                                                                    86.04
Supervised SVM (Kennedy et al., Comp. Intell. 2006)                      86.2
Supervised subjective MR, SVM (Pang et al., ACL 2004)                    87.2
JAST: Joint Author Sentiment Topic Model (Mukherjee et al., SDM 2014)    87.69
Appraisal Groups, supervised (Whitelaw et al., CIKM 2005)                90.2

Use Case: Thwarting Detection. Example of thwarting from (Pang and Lee, EMNLP 2002): "This film sounds like a great plot, the actors are first grade, and the supporting cast is good as well, and Stallone is attempting to deliver a good performance. However, it can't hold up." The overall review polarity differs from that of the majority of opinion words in the review.

Dataset size  Positive Thwarted  Negative Thwarted
1467          279                132

Model             Thwarting Acc.
Bag-of-words SVM  61.54
PASOT             73.07
Table: Thwarting Accuracy Comparison
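The thwarting signal itself can be sketched as a disagreement check: a review is flagged as thwarted when its overall (gold) polarity disagrees with the majority polarity of its opinion words. The word lists below are a tiny illustrative lexicon, not the paper's resources:

```python
def opinion_majority(text, pos, neg):
    """Majority polarity of the opinion words: +1, -1, or 0 on a tie."""
    words = text.lower().split()
    score = sum(w in pos for w in words) - sum(w in neg for w in words)
    return 1 if score > 0 else -1 if score < 0 else 0

def is_thwarted(text, overall, pos, neg):
    m = opinion_majority(text, pos, neg)
    return m != 0 and m != overall

POS = {"great", "good"}   # illustrative lexicon
NEG = {"bad", "sloppy"}

review = ("This film sounds like a great plot, the actors are first grade, "
          "and the supporting cast is good as well. However, it can't hold up.")
# Majority of opinion words is positive, but the gold polarity is negative.
print(is_thwarted(review, overall=-1, pos=POS, neg=NEG))
```

Flat bag-of-words models follow the word majority and fail on such reviews, which is why the hierarchical, author-weighted aggregation fares better on the thwarted subset.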

Variation of ESW of Facets with Review Rating for a Specific Author

Variation of ESW of Facets with Review Rating for 10 Authors

Conclusions: Hierarchical sentiment aggregation performs better than flat classification models. Proposed an approach to construct a Phrase Annotated Author-Specific Sentiment Ontology Tree (PASOT). Author-specific modeling of reviews better captures author intention and facet preferences, leading to better prediction models.

Questions???