Understanding How People Interact with Web Search Results that Change in Real-Time Using Implicit Feedback


Jin Young Kim, Microsoft Bing, Bellevue, WA, USA; Mark Cramer, Surf Canyon, San Francisco, CA, USA; Jaime Teevan, Microsoft Research, Redmond, WA, USA; Dmitry Lagun, Emory University, Atlanta, GA, USA

ABSTRACT

The way a searcher interacts with query results can reveal a lot about what is being sought. Considerable research has gone into using implicit relevance feedback to identify relevant content in real-time, but little is known about how best to present this newly identified relevant content to users. In this paper we compare a traditional search interface with one that dynamically re-ranks and recommends search results as the user interacts with it, in order to build a picture of how and when users should be offered dynamically identified relevant content. We present several studies that compare logged behavior for hundreds of thousands of users and millions of queries, as well as self-reported measures of success, across the two interaction models. Compared to traditional web search, users presented with dynamically ranked results exhibit higher engagement and find information faster, particularly during exploratory tasks. These findings have implications for how search engines might best exploit implicit feedback in real-time in order to help users identify the most relevant results as quickly as possible.

Categories and Subject Descriptors

H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval---search process

Keywords

Interactive information retrieval, query log analysis, web search, dynamic ranked retrieval, implicit relevance feedback.

1. INTRODUCTION

Searchers do not always find what they seek after a single query. Instead, they often issue multiple queries, incorporating what they learn from the results to iterate and refine how they express their information needs.
While many search engines try to make this easier by providing reformulation suggestions, and some searchers are able to quickly incorporate this feedback, others explore each set of search results exhaustively and fail to benefit from any new information identified during the search process [3].

CIKM'13, Oct. 27-Nov. 1, 2013, San Francisco, CA, USA. Copyright 2013 ACM.

To address this, search engines have begun to incorporate session context into the search results they return for subsequent queries in a session [20]. For example, if a person issues the query "apples" and then clicks on a website about fruit, future queries in that session can be biased to return more fruit-related results versus company-related results. Different types of session context have been explored, including past queries [18], the topic or reading level [12] of clicked results, and the snippet text [26]. The onus of reformulating the query, however, continues to rest upon the user. In cases where users' search skills are limited [30], their knowledge of the subject is weak [29], or the search topic is difficult or exploratory [31], reformulation can be daunting and sometimes overwhelming. There is, however, an opportunity for search engines to exploit the implicit feedback users provide following a query to present the most relevant results immediately, without requiring a subsequent query.
This can be done by dynamically altering the result set, in response to real-time implicit relevance feedback, as the user interacts with it. Little is currently known, however, about how people might best encounter and use newly identified content in the course of a single query. To provide insight into dynamic result set interaction, we explore how and when new content is useful to searchers in the course of a single query. We present several large-scale studies of people's interactions with Surf Canyon, a popular commercial search system that re-ranks search results in real-time following each user action. Contextually relevant results that initially might have been ranked deep within the result set have the opportunity to be promoted based on the user's implicit signals. An example for the query "apples" is shown in Figure 1. Upon returning to the result page after clicking on the second result (a health food website), the user is presented with a real-time recommendation based on this action, including a result about the dental benefits of apples. Because this search system is used daily by hundreds of thousands of users for millions of queries, the usage data paints a picture of how real-time, dynamically ranked results impact people's behavior. We present the results of two controlled studies designed to compare dynamic ranking with traditional, static web search.

Figure 1. Real-time recommendations are presented inline based on the results the user clicks.

Our analysis reveals that users have a higher engagement in the search process when provided with a dynamic feedback experience, clicking on more results and spending a longer time searching. Participants given exploratory tasks to complete on the two systems found more results while expending less time with dynamic results than with a traditional search interface.

2. RELATED WORK

There is growing interest in the IR community around understanding and supporting personalization [18, 26]. Researchers have explored short-term personalization based on session context, including previous searches and clicked results, to identify relevant results [21, 9]. Teevan et al. [27] present a framework to identify queries that would benefit the most from personalization. In this paper we explore which queries benefit most from real-time personalized results.

Relevance feedback is the primary post-query method for automatically improving the system's representation of a searcher's information need, and it has been studied extensively. Explicit relevance feedback [13] allows users to select documents or terms to be used for query expansion. Explicit relevance feedback is rarely used, however, as it requires direct interaction, placing cognitive load on the user, and often results in the identification of irrelevant content [10]. To overcome some of these challenges, researchers have explored the use of implicit relevance feedback. With implicit relevance feedback, user behavior, such as clicking documents or scrolling, is unobtrusively monitored and used to expand the understanding of users' information needs beyond the query [32]. In addition to explicit and implicit relevance feedback, other adaptive search systems have been explored. Kaplan et al. [11], for example, describe a navigation scheme that adapts to user behavior by using associative matrices that encode user preferences.
The term "dynamic ranking" and a theoretical justification for the approach were formally introduced by Brandt et al. [7], who proposed a retrieval model that optimizes the relevance of dynamic recommendations based on the user's choice of documents. Cramer et al. [17] present a preliminary study of user behavior with dynamic ranking based on log data. They find that dynamic ranking increases the click-through rate of recommended URLs compared to a baseline system. We extend this line of research by conducting a user-centric evaluation and presenting additional analysis of searcher behavior during interaction with a dynamic-ranking-enabled system.

Search systems that rely on context and personalization are difficult to evaluate because they depart from the notion of global relevance, which can be effectively assessed by judges, to a notion of personalized relevance that is hard to measure by anyone other than the particular user. As such, Bennett et al. [4] suggest evaluating personalized search algorithms online by automatically identifying search result click behaviors that suggest satisfaction. Ageev et al. [1] propose a scalable, game-like framework for conducting remote user studies of searchers' success. We compare a system that dynamically ranks results to a baseline system by conducting a recall-based user study with tasks similar to those used by Ageev et al. [1]. Online controlled experiments (e.g., [14]) and crowdsourced evaluation (e.g., [6]) employ similar approaches.

To summarize, previous work on personalization, relevance feedback and adaptive search has resulted in promising ways to use context and implicit feedback to identify relevant results. Evaluating such systems, however, is difficult, and little is known about how people actually interact with relevant content that is identified and presented to them in real-time.
In this paper we conduct user-centric evaluations of a dynamic ranking system that provide helpful insights for researchers and practitioners. We begin by discussing the details of the dynamic ranking system we studied.

3. DYNAMIC RANKING SYSTEM

To understand how people interact with dynamically ranked results, we study Surf Canyon, a popular commercial search system. Surf Canyon provides a browser extension (available for IE, Firefox, and Chrome) that applies an interactive dynamic layer on top of the results returned by a number of existing popular search engines (e.g., Google, Bing, and Yahoo!), re-ranking search results as users interact with them. In this section we briefly describe how dynamically ranked results are identified and look more closely at how these results are then displayed.

3.1 Identifying New Relevant Results

When a query is issued to a major search engine using a browser that has the Surf Canyon extension running, the system begins by fetching, in the background, the top results from the underlying search engine. The user is initially presented with the top 10 results, as identified by the underlying search engine, in a typical fashion. However, by then monitoring user actions, including link clicks, web page visits, scrolling, and back button clicks, Surf Canyon starts generating a real-time model of inferred intent. Intent is derived from the titles, snippets and URLs presented to the user, and, in some cases, the page content and metadata of clicked and skipped results. All activity in the current information session is used to build the model. This model is used to expand upon the initial understanding of the user's information need, which was derived only from the query and other data available prior to the user submitting the query. The real-time inferred intent model is then exploited immediately after each user action in order to re-rank the result set.
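The click-driven update-then-re-rank loop described above can be sketched in miniature. Everything concrete here is an illustrative assumption, not Surf Canyon's actual (proprietary) model: the bag-of-words intent representation, the click/skip weights, and the toy result snippets.

```python
from collections import Counter

def update_intent(intent, clicked, skipped, w_click=1.0, w_skip=0.5):
    """Update a bag-of-words intent model: boost terms from clicked
    snippets, demote terms from seen-but-skipped ones. The weights
    are illustrative assumptions, not the real system's values."""
    for term in clicked:
        intent[term] += w_click
    for term in skipped:
        intent[term] -= w_skip
    return intent

def rerank(unseen_results, intent):
    """Order unseen results by the summed intent weight of their
    snippet terms, highest first."""
    def score(result):
        return sum(intent.get(t, 0.0) for t in result["terms"])
    return sorted(unseen_results, key=score, reverse=True)

# Toy example for the query "apples": clicking a health-food result
# (and skipping company-related ones) promotes an unseen result
# about the dental benefits of apples.
intent = Counter()
update_intent(intent, clicked={"health", "nutrition", "fruit"},
              skipped={"iphone", "mac"})
unseen = [
    {"url": "apple.com/mac", "terms": {"mac", "iphone"}},
    {"url": "dental-benefits", "terms": {"fruit", "health", "teeth"}},
]
top = rerank(unseen, intent)[0]
```

The real model additionally weighs scrolling, dwell time, and page content, but the shape of the computation — update a per-session model after every action, then re-score only unseen results — is the same.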
3.2 Displaying Relevant Results to the User

Surf Canyon presents users with newly identified results by re-ranking the result set and then displaying content that has not yet been viewed. This is done in two ways: 1) as indented recommendations following a search result click, and 2) as an additional page of results following a request for more results. Only unseen content is displayed, as previous research has shown that results that change as users interact with the search page can interfere with the ability to find information because results no longer appear where expected [25].

3.2.1 Recommendations Following a Click

After viewing a result, should the user's information need remain unsatisfied, for whatever reason, and the user clicks back to return to the search page, this is an opportunity to display re-ranked results. When users return to a result page while using Surf Canyon, newly identified results are displayed beneath the selected document. These results are drawn from a large set of unseen results that have been fetched from subsequent pages of results returned by the underlying search engine. Because changes to a search page that is actively being used can be disorienting [24], new results are indicated by indenting. Each time a user clicks a search result, the real-time inferred intent model is updated, the entire result set is re-ranked, and the most relevant previously unseen results are brought forward to the current search results page. This happens even when an indented re-ranked result is selected, in which case the recommendations are nested, to a maximum of three levels deep. The Surf Canyon click-based recommendations are intended to add a new element to existing search interfaces while minimizing the disturbance to a user's normal workflow. In practice users are typically able to use the re-ranking feature without instruction or guidance. In this paper we focus on how people interact with dynamically re-ranked search results, but the notion of adding dynamically identified new content based on implicit behavior is quite general and may easily be extended to other types of information, such as relevant entities and contextual advertisements.

3.2.2 Additional Requested New Content

With Surf Canyon, users are also able to interact with new, contextually relevant content when navigating to subsequent pages of results. Upon clicking "More Results," the real-time inferred intent model is once again updated. Rather than show results 11 to 20 as initially computed at the beginning of the session, the system displays the ten most relevant results as computed by the model. Because this content has not been previously viewed by the user, there is no need to visually indicate that it is new by indenting.

4. METHODOLOGY

We explore how users interact with dynamically ranked results by studying the Surf Canyon logs. To compare the traditional static search experience with users' experiences with dynamically ranked content, we directed a portion of Surf Canyon's traffic to a traditional static search experience. Although this method allowed us to compare the behavior of users with and without the dynamic ranking, even in a controlled experimental log study like this it is not easy to control for tasks. For this reason we conducted an additional user study with controlled tasks and self-reported success.

4.1 Log Data with Controlled Traffic

We studied user behaviors in a set of controlled traffic experiments in which different user groups were exposed to different configurations of the service. The data set we used consists of log data for major search engines collected from the Surf Canyon browser extension. The extension captures users' interactions with the search result pages, including queries, clicks on results and various other pieces of information.
The data sets also contained session IDs, defined using a period of 30 minutes of inactivity as the session boundary [28]. Anonymous user IDs were used to group queries into information sessions. Since many of the clicks were on dynamically ranked results or re-ranked results on pages beyond the first, we utilized metadata associated with each click to distinguish them from regular result clicks. Since the results on subsequent pages were all re-ranked, all clicks on the second page and beyond were considered clicks on dynamically ranked results. To quantify the differences in search behaviors caused by the two different ways dynamic results are displayed to the searcher, we ran two experiments for a large fraction of user traffic in which we turned each off separately for 24 hours. To study the impact of dynamically ranked results, we turned off dynamic ranking for 20% of users. The other 80% of users received the standard dynamic ranking experience. To study the impact of dynamically ranked subsequent-page content, we turned off re-ranking for 50% of the users. The net result was 387,347 queries available to study the impact of click-based recommendations and 1,560,996 queries to study the impact of dynamically re-ranked next-page content. Although traffic in the controlled online experiment was randomly split, there could be some task-based variation. Previous research has shown that search behavior can vary significantly by task [2]. For this reason, we further filtered to look only at overlapping queries, by first aggregating the data from each group by query and then taking the intersection of the traffic based on the query string. This yielded 11,655 unique queries in the case of the dynamically ranked results. As there were a limited number of overlapping queries in the next-page case, we did not do this additional analysis for that experiment.
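The 30-minute inactivity boundary used to define sessions [28] is straightforward to implement; the event representation below (timestamped query strings for a single anonymous user) is an assumption for illustration.

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)  # inactivity boundary, as in the paper

def sessionize(events):
    """Group one user's (timestamp, query) events into sessions,
    starting a new session whenever >= 30 minutes elapse between
    consecutive events."""
    sessions = []
    prev = None
    for ts, query in sorted(events):
        if prev is None or ts - prev >= SESSION_GAP:
            sessions.append([])
        sessions[-1].append(query)
        prev = ts
    return sessions

# Two queries five minutes apart share a session; a query 45 minutes
# later starts a new one.
t0 = datetime(2013, 10, 27, 9, 0)
events = [(t0, "apples"),
          (t0 + timedelta(minutes=5), "apple nutrition"),
          (t0 + timedelta(minutes=50), "weather")]
sessions = sessionize(events)
```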
Navigational queries (targeted at navigating to a specific website) are common, but search behavior surrounding these queries is known to be particularly different from other types of search behavior [2]. Conversely, people are more inclined to click many results following a query when their information need is exploratory in nature [31]. We hypothesized that dynamically ranked content is particularly likely to be useful for exploratory, open-ended

Table 1: Query-level statistics of the controlled traffic for all queries and for non-navigational queries only (bottom two rows) when click-based recommendations were selectively turned off. (Columns: Experimental Condition, N, Query Duration, and clicks on Total, Original, Dynamic, and Next Page Dynamic results.)

Table 2: Query-level statistics of the controlled traffic when subsequent-page re-ranking was selectively turned off. (Columns: Experimental Condition, n, and % of All Clicks on Total, Original, Dynamic, Next Page, and Dynamic Next Page results.)

Table 3: Session-level statistics for the controlled traffic when click-based recommendations (top two rows) and subsequent-page re-ranking (bottom two rows) were selectively turned off. (Columns: Experimental Condition, Queries in a Session, Session Duration, and clicks on Total, Original, Dynamic, and Next Page Dynamic results.)

searches, and we thus separate the data by task type in our analysis. To study the log data by task, we built a classifier to identify navigational queries based on methodologies in previous work [15].

4.2 User Study

To further control for task, we supplemented the log data with smaller-scale data in which we provided users with tasks and asked for explicit feedback on their success. To do this, we built upon previous work using an information search game for modeling success to evaluate different variants of interaction models. We used the UFindIt framework [1] to collect search behavior data from paid Amazon Mechanical Turk users. As in the original UFindIt game, we used Apache web server proxies to log all pages visited by users during the game. The searchers were given task descriptions as well as initial queries. While we pre-populated the search box with the initial query, searchers were allowed to change it to whatever query terms they thought reasonable. The study was a 2x2 design in which half of the participants interacted with dynamically ranked results while the other half did not. In each case, half of the participants were given fact-finding questions and asked to find a single answer, while the other half were given search topics that were intended to be more exploratory and were asked to find five different, relevant URLs. The fact-finding questions were drawn from the 18 original UFindIt questions, and included, for example, "What were the deadliest tornadoes in history?" The exploratory search topics were drawn from the TREC 2010 Web track [8]. We randomly selected 10 topics and used their subtopics as intent-specific task descriptions while providing topic names as initial queries. In our analysis we used two definitions of search success: 1) self-reported success (i.e., whether any answer URL was submitted by the player), and 2) the correctness of the submitted URLs.
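The second definition relies on crowdsourced correctness labels, and the consistency of such labels is conventionally quantified with Fleiss' kappa, as done in the next paragraph. A minimal sketch, with an invented ratings table:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table where ratings[i][j] is the number of
    assessors who assigned item i to category j; every item must be
    judged by the same number of assessors."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    n_cats = len(ratings[0])
    # Per-category proportion over all assignments.
    p = [sum(row[j] for row in ratings) / (n_items * n_raters)
         for j in range(n_cats)]
    # Per-item observed agreement.
    P_i = [(sum(c * c for c in row) - n_raters)
           / (n_raters * (n_raters - 1)) for row in ratings]
    P_bar = sum(P_i) / n_items
    P_e = sum(pj * pj for pj in p)  # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Invented example: three URLs judged correct/incorrect by two
# assessors each; the first two items get unanimous judgments.
kappa = fleiss_kappa([[2, 0], [0, 2], [1, 1]])
```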
Answer correctness was determined by examining the submitted URLs and checking whether they contained correct answers for the task question. For this definition of success we obtained labels for 260 URLs submitted in factoid tasks, as well as 939 URLs submitted in exploratory tasks. The labeling process was crowdsourced through CrowdFlower, where each submitted URL was checked by independent assessors. To quantify agreement among the assessors we calculated Fleiss' kappa. For the exploratory questions the kappa value was 0.6, which indicated reasonable agreement among the judges. These two definitions of success allowed us to evaluate the quantity and quality of the interaction. Users who completed fewer than two tasks were dropped to ensure trustworthiness. The final dataset included the results of 826 tasks (417 fact finding, 409 exploratory) by 91 users.

5. COMPARING INTERACTION MODELS

In this section we compare people's interactive experiences with dynamic ranking versus the traditional search experience, based on controlled log analysis and a user study.

5.1 Controlled Traffic Analysis

We analyzed the controlled traffic data to understand how search behavior differed when people were presented with dynamically ranked results or re-ranked content on subsequent result pages. Specifically, we looked at the average amount of time people spent on a query and the average number of clicks people made on the search result content, broken down by whether they clicked on the original results returned prior to dynamic ranking, the dynamically recommended results, or results on subsequent result pages. A summary of these behaviors is shown in Tables 1 through 3. Statistically significant differences (based on t-tests) are marked with asterisks (* for p < .05, ** for p < .01, and *** for p < .001). Table 1 shows query-level results. We observe that users presented with dynamically ranked results spent more time searching and clicked on more results in total than the static results group.
Users spent over 5 seconds more interacting with the result page when the results contained dynamic content than when they did not. They also clicked more results (1.21 clicks compared with 1.19 clicks). Although people were less likely to click on results that were part of the original result set when presented with dynamic content, they more than made up for this by clicking on the dynamically ranked results. We hypothesized that dynamic content would be particularly useful for tasks that are more exploratory in nature. The bottom two rows of Table 1 show the behavior observed for non-navigational queries, where people tend to search longer and click more. In particular, we observed that they are more likely to interact with dynamically ranked results. (We will explore these differences in greater depth in Section 6 when we look at how the use of dynamic results is correlated with characteristics of the queries, sessions and users.) The differences between the non-navigational behavior with and without dynamically ranked results echo what was seen for general query traffic, except that they are uniformly larger in magnitude and percentage. Table 2 shows the results for the controlled study in which subsequent-page re-ranking was turned off for the control group. Behavior on subsequent pages is different when that content was generated dynamically during the course of the search. The percentage of clicks on subsequent-page content is significantly higher when those results were re-ranked based on within-query user behavior than when they were not. Since users were exposed to the same interface, the difference in the percent of clicks may be attributed to a difference in the quality or content of the results. Note that the average number of result clicks is much higher in Table 1 than in Table 2. We also looked at session-level behavior for both experimental conditions, with summary statistics presented in Table 3.
Session-level behavior is generally similar to query-level behavior. As was the case for the query-level statistics, we observed that users who received click-based recommendations spent longer (by 10 seconds) searching than other users. They also clicked on significantly more results in total, with some of those clicks going to dynamic results at the expense of clicks on the originally presented results. Notably, people who received click-based recommendations issued fewer queries (2.86 vs. 2.91) during a session than people who did not. This may be because the dynamic nature of the page surfaced results that mitigated the need to re-query. While at a query level we only saw an interaction between the presence or absence of next-page re-ranking and people's next-page behavior in Table 2, at a session level we see significant differences in all types of behavior as a function of the experimental condition. People with dynamic subsequent-page content not only clicked on more next-page results, but also clicked on more results overall. The controlled log data suggests that the inclusion of dynamically ranked content increases people's engagement with the results and the amount of time they spend searching, while decreasing the number of queries they need to issue. While previous research suggests that increased engagement tends to indicate a positive change for users [3], we wanted to better understand the impact of these changes on people's ability to find what they are looking for. We also wanted to further explore the particularly large changes observed for non-navigational queries. For this reason, we look at the results of the controlled user study.
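The t-tests behind the significance asterisks in Tables 1 through 3 can be run from per-group summary statistics alone. A sketch using Welch's formulation; the variances below are invented purely to show the mechanics (the paper reports only means), and with the large sample sizes typical of log data, |t| can be compared against normal quantiles (e.g., 1.96 for p < .05).

```python
import math

def welch_t(mean1, var1, n1, mean2, var2, n2):
    """Welch's t statistic for two independent samples with
    unequal variances."""
    se = math.sqrt(var1 / n1 + var2 / n2)
    return (mean1 - mean2) / se

# Illustrative only: mean clicks per query of 1.21 vs. 1.19 from
# Table 1's overlapping-query analysis, with invented unit variances
# and the 11,655-query group sizes.
t = welch_t(1.21, 1.0, 11655, 1.19, 1.0, 11655)
```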

5.2 User Study Results

As with the controlled log study, in the user study people were directed to a version of the search engine that either included dynamically ranked results or did not. In this study, however, users were performing directed tasks and were asked to provide information about what they found so that their success could be measured as described in Section 4.2.

Figure 2: Task completion time box plots for factoid tasks (left) and exploratory tasks (right).

Results are summarized in Table 4, where we report task-level averages for completion time, success rate, number of clicks on original results, number of clicks on dynamically ranked results, number of clicks on subsequent-page results, and number of queries. Note that we have two measures of success, which focus on the quantity (self-reported success) and quality (correctness of the URLs) of the interaction. We begin our analysis of the results by looking at how successful people were at completing the two different types of search tasks: factoid and exploratory. Participants generally reported that they were very successful: 87% to 99% of all tasks were believed to be successfully completed. For factoid tasks there was no difference in reported success, but for exploratory tasks the users with dynamic recommendations felt significantly more successful (94% vs. 99%; p < 0.01 using a proportions test). In actuality, however, people were much less successful than they believed, with questions being answered correctly only 38% to 41% of the time. For factoid tasks, participants in the dynamic ranking group completed 41.4% of the tasks correctly, while participants in the static ranking group had a very similar 41.0%. Likewise, for exploratory tasks the mean success rates were 38.7% and 38.9% respectively. In both cases, any observed differences were not statistically significant (p > .5 using a proportions test).
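The proportions test applied above is presumably the standard two-proportion z-test with a pooled estimate; a sketch, where the roughly-204-tasks-per-group sizes are an assumption derived from the reported exploratory-task totals.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test using the pooled success
    estimate; returns (z, p_value) via the normal CDF."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Roughly the exploratory-task comparison: 94% vs. 99% self-reported
# success with ~204 tasks per group (group sizes are an assumption).
z, p = two_proportion_z(0.94, 204, 0.99, 204)
```

With these inputs the resulting p-value falls below 0.01, consistent with the significance level the paper reports.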
The results indicate that dynamic ranking helps increase the quantity of the results, although not necessarily the quality. This is understandable in that click-based recommendations allow users to explore more results than a static search interface does, without query reformulation. It is also interesting to note that although people thought they were more successful on exploratory tasks than on factoid tasks, they were less likely to answer correctly. Task completion time is another important metric that has been used in prior work [33] to provide an intuitive criterion for assessing the utility of a search interface. Figure 2(a) shows task completion times broken down by the availability of dynamically ranked results for factoid tasks. On average, players in the dynamic ranking group spent about the same time searching for answers (107.7 seconds) as the static ranking group. Differences between the means of the dynamic and static rankings are not statistically significant with a Student t-test (p = .94). Similarly, Figure 2(b) shows task completion times for exploratory tasks. In contrast to what we observed with factoid tasks, completion time varied significantly between the dynamic and static ranking groups: participants in the dynamic ranking group completed tasks faster on average, with participants in the static ranking group spending 40% more time. The difference was significant (p < .01). Another striking difference between factoid and exploratory tasks can be observed in the number of dynamically ranked results clicked for factoid (0.05) and exploratory (0.44) questions. This is consistent with the differences we observed in the logs for navigational and non-navigational queries (Tables 1 and 5), again confirming our intuition that an interactive experience is best suited for more complex tasks.
Looking at the impact of dynamic ranking on the click and query counts, we found that for factoid questions the average number of clicks for the dynamic ranking and static ranking groups was 1.79 and 2.48 respectively. For exploratory tasks it was 4.26 and 6.12 respectively. Differences in the number of clicks were not significant (p > .2, Wilcoxon-Mann-Whitney test). The number of queries remained relatively similar across groups and task types, although the dynamic ranking group issued slightly fewer queries. When comparing traditional search behavior with people's interactions with dynamically ranked content in the logs, we observed that people engaged more with search results when they were given dynamically ranked results as they searched, clicking more and searching longer. In contrast, when we controlled the tasks that

Table 4. Task-level statistics for user study participants, broken down into fact finding and exploratory tasks. (Columns: n, Reported Success, Answered Correctly, Queries per Task, Task Duration, and clicks on Original, Dynamic, and Next Page Dynamic results.)

Table 5. Overall statistics of how people interacted with the dynamically ranked results, broken down by query type (all, navigational, and non-navigational queries). (Columns: n and the percent of clicks on Total, Original, Dynamic, Next Page, and Dynamic Next Page results.)

users completed, we found that their searches were faster and they clicked less frequently. Given the levels of success people achieved with dynamic content, it may be that for fixed tasks the dynamically ranked results reduce the effort required to complete the task, but for real-world tasks that can evolve and grow beyond the initial target, the new content encourages additional interaction and exploration. The fact that we observed the opposite trend in task duration between the controlled traffic analysis and the user study stresses the importance of controlling the search task in IR research.

6. CONCLUSIONS

We have compared dynamically ranking search results with traditional static web search. By studying user behaviors in a set of controlled traffic experiments, we found that dynamic content leads to higher user engagement in the search process. To ensure complete control of the search task, we also conducted a user study with 91 participants and two types of search tasks, where the results showed that the click-based recommendation feature of dynamic ranking significantly improves user performance on exploratory search tasks, as measured by task completion time and the number of self-reported URL answers. While our study was based on the interaction model employed by Surf Canyon, we believe that many of our findings have implications for enabling rich user interactions in web search in general. Future work includes the extension of dynamic ranking by removing some of its restrictions.

7. REFERENCES

[1] Mikhail Ageev, Qi Guo, Dmitry Lagun, and Eugene Agichtein, "Find it if you can: a game for modeling different types of web search success using interaction data," in SIGIR, 2011.
[2] Azin Ashkan, Charles L. Clarke, Eugene Agichtein, and Qi Guo, "Classifying and Characterizing Query Intent," in ECIR, 2009.
[3] Anne Aula, Päivi Majaranta, and Kari-Jouko Räihä, "Eye-tracking reveals the personal styles for search result evaluation," in Proceedings of IFIP TC13, 2005.
[4] Paul N. Bennett, Filip Radlinski, Ryen W. White, and Emine Yilmaz, "Inferring and using location metadata to personalize web search," in SIGIR, 2011.
[6] Roi Blanco et al., "Repeatable and reliable search system evaluation using crowdsourcing," in SIGIR, 2011.
[7] Christina Brandt, Thorsten Joachims, Yisong Yue, and Jacob Bank, "Dynamic ranked retrieval," in WSDM, 2011.
[8] Nick Craswell, Dennis Fetterly, and Marc Najork, "Microsoft Research at TREC 2010 Web Track," in TREC, 2010.
[9] Mariam Daoud, Lynda Tamine-Lechani, and Mohand Boughanem, "Towards a graph-based user profile modeling for a session-based personalized search," Knowl. Inf. Syst., vol. 21, no. 3.
[10] Bernard J. Jansen, Amanda Spink, and Tefko Saracevic, "The Use of Relevance Feedback on the Web: Implications for Web IR System Design," in WebNet, 1999.
[11] Craig A. Kaplan, Justine Fenwick, and James Chen, "Adaptive Hypertext Navigation Based On User Goals and Context," User Modeling and User-Adapted Interaction, vol. 3.
[12] Jin Young Kim, Kevyn Collins-Thompson, Paul N. Bennett, and Susan T. Dumais, "Characterizing web content, user interests, and search behavior by reading level and topic."
[13] Jürgen Koenemann and Nicholas J. Belkin, "A Case for Interaction: A Study of Interactive Information Retrieval Behavior and Effectiveness," 1996.
[14] Ron Kohavi, Roger Longbotham, Dan Sommerfield, and Randal M. Henne, "Controlled experiments on the web: survey and practical guide," Data Min. Knowl. Discov., vol. 18, no. 1.
[15] Uichin Lee, Zhenyu Liu, and Junghoo Cho, "Automatic identification of user goals in Web search," in WWW, 2005.
[17] M. Cramer, M. Wertheim, and D. Hardtke, "Demonstration of improved search result relevancy using real-time implicit relevance feedback," in SIGIR Workshop.
[18] Nicolaas Matthijs and Filip Radlinski, "Personalizing web search using long term browsing history," in WSDM, 2011.
[20] Barry Schwartz, "Surviving Personalization With Bing & Google," 2012. [Online]. Available: personal-bing-google html
[21] Xuehua Shen, Bin Tan, and ChengXiang Zhai, "Context-sensitive information retrieval using implicit feedback," in SIGIR, 2005.
[24] Jaime Teevan, "How people recall, recognize, and reuse search results," ACM Transactions on Information Systems, vol. 26, pp. 1-27.
[25] Jaime Teevan, Eytan Adar, Rosie Jones, and Michael A. S. Potts, "Information re-retrieval: repeat queries in Yahoo's logs," in SIGIR, 2007.
[26] Jaime Teevan, Susan T. Dumais, and Eric Horvitz, "Personalizing search via automated analysis of interests and activities," in SIGIR, 2005.
[27] Jaime Teevan, Susan T. Dumais, and Daniel J. Liebling, "To personalize or not to personalize: modeling queries with variation in user intent," in SIGIR, 2008.
[28] Ryen W. White, Paul N. Bennett, and Susan T. Dumais, "Predicting short-term interests using activity-based search context," in CIKM, 2010.
[29] Ryen W. White, Susan T. Dumais, and Jaime Teevan, "Characterizing the influence of domain expertise on web search behavior," in WSDM, 2009.
[30] Ryen W. White and Dan Morris, "Investigating the querying and browsing behavior of advanced search engine users," in SIGIR, 2007.
[31] Ryen W. White and Resa A. Roth, Exploratory Search: Beyond the Query-Response Paradigm. Morgan & Claypool.
[32] Ryen W. White, Ian Ruthven, and Joemon M.
Jose, "Finding relevant documents using top ranking sentences: an evaluation of two alternative schemes," in SIGIR, New York, NY, USA, 2002, pp [33] Ya Xu and David Mease, "Evaluating web search using task completion time," in SIGIR, New York, NY, USA, 2009, pp
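The click-count comparisons in the results rely on the Wilcoxon-Mann-Whitney test. As an illustration only — the per-user click counts below are invented, not the study's data, and the helper is a simplified sketch using the large-sample normal approximation with midranks for ties (no tie or continuity correction) — such a comparison can be computed as:

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Wilcoxon-Mann-Whitney test (normal approximation).

    Returns (U, p) for sample x versus sample y. Ties receive midranks;
    no tie correction or continuity correction is applied, so this is a
    rough sketch rather than a replacement for a statistics library.
    """
    n1, n2 = len(x), len(y)
    pooled = sorted(list(x) + list(y))
    # Assign each distinct value the average (mid) rank of its run of ties.
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j + 1) / 2  # mean of 1-based ranks i+1..j
        i = j
    r1 = sum(ranks[v] for v in x)          # rank sum of the first sample
    u = r1 - n1 * (n1 + 1) / 2             # U statistic for sample x
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return u, p

# Hypothetical per-user click counts for the two interface groups.
dynamic = [1, 2, 1, 3, 2, 1, 2, 2]
static = [2, 3, 2, 4, 3, 2, 3, 1]
u, p = mann_whitney_u(dynamic, static)
print(f"U = {u:.1f}, p = {p:.3f}")  # not significant at the .05 level here
```

In practice, a library routine such as scipy.stats.mannwhitneyu adds exact small-sample p-values and tie corrections; the sketch above only mirrors the large-sample form of the test reported in the results.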

Enhancing Cluster Quality by Using User Browsing Time

Enhancing Cluster Quality by Using User Browsing Time Enhancing Cluster Quality by Using User Browsing Time Rehab M. Duwairi* and Khaleifah Al.jada'** * Department of Computer Information Systems, Jordan University of Science and Technology, Irbid 22110,

More information

Enhancing Cluster Quality by Using User Browsing Time

Enhancing Cluster Quality by Using User Browsing Time Enhancing Cluster Quality by Using User Browsing Time Rehab Duwairi Dept. of Computer Information Systems Jordan Univ. of Sc. and Technology Irbid, Jordan rehab@just.edu.jo Khaleifah Al.jada' Dept. of

More information

Finding Relevant Documents using Top Ranking Sentences: An Evaluation of Two Alternative Schemes

Finding Relevant Documents using Top Ranking Sentences: An Evaluation of Two Alternative Schemes Finding Relevant Documents using Top Ranking Sentences: An Evaluation of Two Alternative Schemes Ryen W. White Department of Computing Science University of Glasgow Glasgow. G12 8QQ whiter@dcs.gla.ac.uk

More information

Towards Predicting Web Searcher Gaze Position from Mouse Movements

Towards Predicting Web Searcher Gaze Position from Mouse Movements Towards Predicting Web Searcher Gaze Position from Mouse Movements Qi Guo Emory University 400 Dowman Dr., W401 Atlanta, GA 30322 USA qguo3@emory.edu Eugene Agichtein Emory University 400 Dowman Dr., W401

More information

Wrapper: An Application for Evaluating Exploratory Searching Outside of the Lab

Wrapper: An Application for Evaluating Exploratory Searching Outside of the Lab Wrapper: An Application for Evaluating Exploratory Searching Outside of the Lab Bernard J Jansen College of Information Sciences and Technology The Pennsylvania State University University Park PA 16802

More information

Evaluating an Associative Browsing Model for Personal Information

Evaluating an Associative Browsing Model for Personal Information Evaluating an Associative Browsing Model for Personal Information Jinyoung Kim, W. Bruce Croft, David A. Smith and Anton Bakalov Department of Computer Science University of Massachusetts Amherst {jykim,croft,dasmith,abakalov}@cs.umass.edu

More information

Interpreting User Inactivity on Search Results

Interpreting User Inactivity on Search Results Interpreting User Inactivity on Search Results Sofia Stamou 1, Efthimis N. Efthimiadis 2 1 Computer Engineering and Informatics Department, Patras University Patras, 26500 GREECE, and Department of Archives

More information

PUTTING CONTEXT INTO SEARCH AND SEARCH INTO CONTEXT. Susan Dumais, Microsoft Research

PUTTING CONTEXT INTO SEARCH AND SEARCH INTO CONTEXT. Susan Dumais, Microsoft Research PUTTING CONTEXT INTO SEARCH AND SEARCH INTO CONTEXT Susan Dumais, Microsoft Research Overview Importance of context in IR Potential for personalization framework Examples Personal navigation Client-side

More information

Fighting Search Engine Amnesia: Reranking Repeated Results

Fighting Search Engine Amnesia: Reranking Repeated Results Fighting Search Engine Amnesia: Reranking Repeated Results ABSTRACT Milad Shokouhi Microsoft milads@microsoft.com Paul Bennett Microsoft Research pauben@microsoft.com Web search engines frequently show

More information

Query Modifications Patterns During Web Searching

Query Modifications Patterns During Web Searching Bernard J. Jansen The Pennsylvania State University jjansen@ist.psu.edu Query Modifications Patterns During Web Searching Amanda Spink Queensland University of Technology ah.spink@qut.edu.au Bhuva Narayan

More information

Automatic Generation of Query Sessions using Text Segmentation

Automatic Generation of Query Sessions using Text Segmentation Automatic Generation of Query Sessions using Text Segmentation Debasis Ganguly, Johannes Leveling, and Gareth J.F. Jones CNGL, School of Computing, Dublin City University, Dublin-9, Ireland {dganguly,

More information

Personalized Web Search

Personalized Web Search Personalized Web Search Dhanraj Mavilodan (dhanrajm@stanford.edu), Kapil Jaisinghani (kjaising@stanford.edu), Radhika Bansal (radhika3@stanford.edu) Abstract: With the increase in the diversity of contents

More information

Enabling Users to Visually Evaluate the Effectiveness of Different Search Queries or Engines

Enabling Users to Visually Evaluate the Effectiveness of Different Search Queries or Engines Appears in WWW 04 Workshop: Measuring Web Effectiveness: The User Perspective, New York, NY, May 18, 2004 Enabling Users to Visually Evaluate the Effectiveness of Different Search Queries or Engines Anselm

More information

University of Delaware at Diversity Task of Web Track 2010

University of Delaware at Diversity Task of Web Track 2010 University of Delaware at Diversity Task of Web Track 2010 Wei Zheng 1, Xuanhui Wang 2, and Hui Fang 1 1 Department of ECE, University of Delaware 2 Yahoo! Abstract We report our systems and experiments

More information

Interactive Campaign Planning for Marketing Analysts

Interactive Campaign Planning for Marketing Analysts Interactive Campaign Planning for Marketing Analysts Fan Du University of Maryland College Park, MD, USA fan@cs.umd.edu Sana Malik Adobe Research San Jose, CA, USA sana.malik@adobe.com Eunyee Koh Adobe

More information

CS6200 Information Retrieval. Jesse Anderton College of Computer and Information Science Northeastern University

CS6200 Information Retrieval. Jesse Anderton College of Computer and Information Science Northeastern University CS6200 Information Retrieval Jesse Anderton College of Computer and Information Science Northeastern University Major Contributors Gerard Salton! Vector Space Model Indexing Relevance Feedback SMART Karen

More information

Using Clusters on the Vivisimo Web Search Engine

Using Clusters on the Vivisimo Web Search Engine Using Clusters on the Vivisimo Web Search Engine Sherry Koshman and Amanda Spink School of Information Sciences University of Pittsburgh 135 N. Bellefield Ave., Pittsburgh, PA 15237 skoshman@sis.pitt.edu,

More information

Internet Usage Transaction Log Studies: The Next Generation

Internet Usage Transaction Log Studies: The Next Generation Internet Usage Transaction Log Studies: The Next Generation Sponsored by SIG USE Dietmar Wolfram, Moderator. School of Information Studies, University of Wisconsin-Milwaukee Milwaukee, WI 53201. dwolfram@uwm.edu

More information

Automatic Query Type Identification Based on Click Through Information

Automatic Query Type Identification Based on Click Through Information Automatic Query Type Identification Based on Click Through Information Yiqun Liu 1,MinZhang 1,LiyunRu 2, and Shaoping Ma 1 1 State Key Lab of Intelligent Tech. & Sys., Tsinghua University, Beijing, China

More information

Context based Re-ranking of Web Documents (CReWD)

Context based Re-ranking of Web Documents (CReWD) Context based Re-ranking of Web Documents (CReWD) Arijit Banerjee, Jagadish Venkatraman Graduate Students, Department of Computer Science, Stanford University arijitb@stanford.edu, jagadish@stanford.edu}

More information

In the Mood to Click? Towards Inferring Receptiveness to Search Advertising

In the Mood to Click? Towards Inferring Receptiveness to Search Advertising In the Mood to Click? Towards Inferring Receptiveness to Search Advertising Qi Guo Eugene Agichtein Mathematics & Computer Science Department Emory University Atlanta, USA {qguo3,eugene}@mathcs.emory.edu

More information

Inferring User Search for Feedback Sessions

Inferring User Search for Feedback Sessions Inferring User Search for Feedback Sessions Sharayu Kakade 1, Prof. Ranjana Barde 2 PG Student, Department of Computer Science, MIT Academy of Engineering, Pune, MH, India 1 Assistant Professor, Department

More information

A Task-Based Evaluation of an Aggregated Search Interface

A Task-Based Evaluation of an Aggregated Search Interface A Task-Based Evaluation of an Aggregated Search Interface No Author Given No Institute Given Abstract. This paper presents a user study that evaluated the effectiveness of an aggregated search interface

More information

PERSONALIZED SEARCH: POTENTIAL AND PITFALLS

PERSONALIZED SEARCH: POTENTIAL AND PITFALLS PERSONALIZED SEARCH: POTENTIAL AND PITFALLS Oct 26, 2016 Susan Dumais, Microsoft Research Overview Context in search Potential for personalization framework Examples Personal navigation Client-side personalization

More information

Search Costs vs. User Satisfaction on Mobile

Search Costs vs. User Satisfaction on Mobile Search Costs vs. User Satisfaction on Mobile Manisha Verma, Emine Yilmaz University College London mverma@cs.ucl.ac.uk, emine.yilmaz@ucl.ac.uk Abstract. Information seeking is an interactive process where

More information

Modeling Contextual Factors of Click Rates

Modeling Contextual Factors of Click Rates Modeling Contextual Factors of Click Rates Hila Becker Columbia University 500 W. 120th Street New York, NY 10027 Christopher Meek Microsoft Research 1 Microsoft Way Redmond, WA 98052 David Maxwell Chickering

More information

A Granular Approach to Web Search Result Presentation

A Granular Approach to Web Search Result Presentation A Granular Approach to Web Search Result Presentation Ryen W. White 1, Joemon M. Jose 1 & Ian Ruthven 2 1 Department of Computing Science, University of Glasgow, Glasgow. Scotland. 2 Department of Computer

More information

Beyond the Commons: Investigating the Value of Personalizing Web Search

Beyond the Commons: Investigating the Value of Personalizing Web Search Beyond the Commons: Investigating the Value of Personalizing Web Search Jaime Teevan 1, Susan T. Dumais 2, and Eric Horvitz 2 1 MIT, CSAIL, 32 Vassar St., G472, Cambridge, MA, 02139 USA teevan@csail.mit.edu

More information

Relevance and Effort: An Analysis of Document Utility

Relevance and Effort: An Analysis of Document Utility Relevance and Effort: An Analysis of Document Utility Emine Yilmaz, Manisha Verma Dept. of Computer Science University College London {e.yilmaz, m.verma}@cs.ucl.ac.uk Nick Craswell, Filip Radlinski, Peter

More information

A Task Level Metric for Measuring Web Search Satisfaction and its Application on Improving Relevance Estimation

A Task Level Metric for Measuring Web Search Satisfaction and its Application on Improving Relevance Estimation A Task Level Metric for Measuring Web Search Satisfaction and its Application on Improving Relevance Estimation Ahmed Hassan Microsoft Research Redmond, WA hassanam@microsoft.com Yang Song Microsoft Research

More information

The Open University s repository of research publications and other research outputs. Search Personalization with Embeddings

The Open University s repository of research publications and other research outputs. Search Personalization with Embeddings Open Research Online The Open University s repository of research publications and other research outputs Search Personalization with Embeddings Conference Item How to cite: Vu, Thanh; Nguyen, Dat Quoc;

More information

Estimating Credibility of User Clicks with Mouse Movement and Eye-tracking Information

Estimating Credibility of User Clicks with Mouse Movement and Eye-tracking Information Estimating Credibility of User Clicks with Mouse Movement and Eye-tracking Information Jiaxin Mao, Yiqun Liu, Min Zhang, Shaoping Ma Department of Computer Science and Technology, Tsinghua University Background

More information

Overview of the TREC 2013 Crowdsourcing Track

Overview of the TREC 2013 Crowdsourcing Track Overview of the TREC 2013 Crowdsourcing Track Mark D. Smucker 1, Gabriella Kazai 2, and Matthew Lease 3 1 Department of Management Sciences, University of Waterloo 2 Microsoft Research, Cambridge, UK 3

More information

Modern Retrieval Evaluations. Hongning Wang

Modern Retrieval Evaluations. Hongning Wang Modern Retrieval Evaluations Hongning Wang CS@UVa What we have known about IR evaluations Three key elements for IR evaluation A document collection A test suite of information needs A set of relevance

More information

Web document summarisation: a task-oriented evaluation

Web document summarisation: a task-oriented evaluation Web document summarisation: a task-oriented evaluation Ryen White whiter@dcs.gla.ac.uk Ian Ruthven igr@dcs.gla.ac.uk Joemon M. Jose jj@dcs.gla.ac.uk Abstract In this paper we present a query-biased summarisation

More information

A Comparison of Error Metrics for Learning Model Parameters in Bayesian Knowledge Tracing

A Comparison of Error Metrics for Learning Model Parameters in Bayesian Knowledge Tracing A Comparison of Error Metrics for Learning Model Parameters in Bayesian Knowledge Tracing Asif Dhanani Seung Yeon Lee Phitchaya Phothilimthana Zachary Pardos Electrical Engineering and Computer Sciences

More information

HUKB at NTCIR-12 IMine-2 task: Utilization of Query Analysis Results and Wikipedia Data for Subtopic Mining

HUKB at NTCIR-12 IMine-2 task: Utilization of Query Analysis Results and Wikipedia Data for Subtopic Mining HUKB at NTCIR-12 IMine-2 task: Utilization of Query Analysis Results and Wikipedia Data for Subtopic Mining Masaharu Yoshioka Graduate School of Information Science and Technology, Hokkaido University

More information

6th Annual 15miles/Neustar Localeze Local Search Usage Study Conducted by comscore

6th Annual 15miles/Neustar Localeze Local Search Usage Study Conducted by comscore 6th Annual 15miles/Neustar Localeze Local Search Usage Study Conducted by comscore Consumers are adopting mobile devices of all types at a blistering pace. The demand for information on the go is higher

More information

Learning to Match. Jun Xu, Zhengdong Lu, Tianqi Chen, Hang Li

Learning to Match. Jun Xu, Zhengdong Lu, Tianqi Chen, Hang Li Learning to Match Jun Xu, Zhengdong Lu, Tianqi Chen, Hang Li 1. Introduction The main tasks in many applications can be formalized as matching between heterogeneous objects, including search, recommendation,

More information

This literature review provides an overview of the various topics related to using implicit

This literature review provides an overview of the various topics related to using implicit Vijay Deepak Dollu. Implicit Feedback in Information Retrieval: A Literature Analysis. A Master s Paper for the M.S. in I.S. degree. April 2005. 56 pages. Advisor: Stephanie W. Haas This literature review

More information

An Attempt to Identify Weakest and Strongest Queries

An Attempt to Identify Weakest and Strongest Queries An Attempt to Identify Weakest and Strongest Queries K. L. Kwok Queens College, City University of NY 65-30 Kissena Boulevard Flushing, NY 11367, USA kwok@ir.cs.qc.edu ABSTRACT We explore some term statistics

More information

Evaluating Implicit Measures to Improve Web Search

Evaluating Implicit Measures to Improve Web Search Evaluating Implicit Measures to Improve Web Search STEVE FOX, KULDEEP KARNAWAT, MARK MYDLAND, SUSAN DUMAIS, and THOMAS WHITE Microsoft Corp. Of growing interest in the area of improving the search experience

More information

Understanding the Relationship between Searchers Queries and Information Goals

Understanding the Relationship between Searchers Queries and Information Goals Understanding the Relationship between Searchers Queries and Information Goals Doug Downey University of Washington Seattle, WA 9895 ddowney@cs.washington.edu Susan Dumais, Dan Liebling, Eric Horvitz Microsoft

More information

Obtaining Language Models of Web Collections Using Query-Based Sampling Techniques

Obtaining Language Models of Web Collections Using Query-Based Sampling Techniques -7695-1435-9/2 $17. (c) 22 IEEE 1 Obtaining Language Models of Web Collections Using Query-Based Sampling Techniques Gary A. Monroe James C. French Allison L. Powell Department of Computer Science University

More information

VIDEO SEARCHING AND BROWSING USING VIEWFINDER

VIDEO SEARCHING AND BROWSING USING VIEWFINDER VIDEO SEARCHING AND BROWSING USING VIEWFINDER By Dan E. Albertson Dr. Javed Mostafa John Fieber Ph. D. Student Associate Professor Ph. D. Candidate Information Science Information Science Information Science

More information

Knowledge Retrieval. Franz J. Kurfess. Computer Science Department California Polytechnic State University San Luis Obispo, CA, U.S.A.

Knowledge Retrieval. Franz J. Kurfess. Computer Science Department California Polytechnic State University San Luis Obispo, CA, U.S.A. Knowledge Retrieval Franz J. Kurfess Computer Science Department California Polytechnic State University San Luis Obispo, CA, U.S.A. 1 Acknowledgements This lecture series has been sponsored by the European

More information

Detecting Good Abandonment in Mobile Search

Detecting Good Abandonment in Mobile Search Detecting Good Abandonment in Mobile Search Kyle Williams, Julia Kiseleva, Aidan C. Crook Γ, Imed Zitouni Γ, Ahmed Hassan Awadallah Γ, Madian Khabsa Γ The Pennsylvania State University, University Park,

More information

Personalized Models of Search Satisfaction. Ahmed Hassan and Ryen White

Personalized Models of Search Satisfaction. Ahmed Hassan and Ryen White Personalized Models of Search Satisfaction Ahmed Hassan and Ryen White Online Satisfaction Measurement Satisfying users is the main objective of any search system Measuring user satisfaction is essential

More information

ITERATIVE SEARCHING IN AN ONLINE DATABASE. Susan T. Dumais and Deborah G. Schmitt Cognitive Science Research Group Bellcore Morristown, NJ

ITERATIVE SEARCHING IN AN ONLINE DATABASE. Susan T. Dumais and Deborah G. Schmitt Cognitive Science Research Group Bellcore Morristown, NJ - 1 - ITERATIVE SEARCHING IN AN ONLINE DATABASE Susan T. Dumais and Deborah G. Schmitt Cognitive Science Research Group Bellcore Morristown, NJ 07962-1910 ABSTRACT An experiment examined how people use

More information

Dissertation Research Proposal

Dissertation Research Proposal Dissertation Research Proposal ISI Information Science Doctoral Dissertation Proposal Scholarship (a) Description for the research, including significance and methodology (10 pages or less, double-spaced)

More information

Issues in Personalizing Information Retrieval

Issues in Personalizing Information Retrieval Feature Article: Garbriella Pasi 3 Issues in Personalizing Information Retrieval Gabriella Pasi Abstract This paper shortly discusses the main issues related to the problem of personalizing search. To

More information

Comment Extraction from Blog Posts and Its Applications to Opinion Mining

Comment Extraction from Blog Posts and Its Applications to Opinion Mining Comment Extraction from Blog Posts and Its Applications to Opinion Mining Huan-An Kao, Hsin-Hsi Chen Department of Computer Science and Information Engineering National Taiwan University, Taipei, Taiwan

More information

PERSONALIZED MOBILE SEARCH ENGINE BASED ON MULTIPLE PREFERENCE, USER PROFILE AND ANDROID PLATFORM

PERSONALIZED MOBILE SEARCH ENGINE BASED ON MULTIPLE PREFERENCE, USER PROFILE AND ANDROID PLATFORM PERSONALIZED MOBILE SEARCH ENGINE BASED ON MULTIPLE PREFERENCE, USER PROFILE AND ANDROID PLATFORM Ajit Aher, Rahul Rohokale, Asst. Prof. Nemade S.B. B.E. (computer) student, Govt. college of engg. & research

More information

Letter Pair Similarity Classification and URL Ranking Based on Feedback Approach

Letter Pair Similarity Classification and URL Ranking Based on Feedback Approach Letter Pair Similarity Classification and URL Ranking Based on Feedback Approach P.T.Shijili 1 P.G Student, Department of CSE, Dr.Nallini Institute of Engineering & Technology, Dharapuram, Tamilnadu, India

More information

Large-Scale Validation and Analysis of Interleaved Search Evaluation

Large-Scale Validation and Analysis of Interleaved Search Evaluation Large-Scale Validation and Analysis of Interleaved Search Evaluation Olivier Chapelle, Thorsten Joachims Filip Radlinski, Yisong Yue Department of Computer Science Cornell University Decide between two

More information

Mining long-lasting exploratory user interests from search history

Mining long-lasting exploratory user interests from search history Mining long-lasting exploratory user interests from search history Bin Tan, Yuanhua Lv and ChengXiang Zhai Department of Computer Science, University of Illinois at Urbana-Champaign bintanuiuc@gmail.com,

More information

Mining the Search Trails of Surfing Crowds: Identifying Relevant Websites from User Activity Data

Mining the Search Trails of Surfing Crowds: Identifying Relevant Websites from User Activity Data Mining the Search Trails of Surfing Crowds: Identifying Relevant Websites from User Activity Data Misha Bilenko and Ryen White presented by Matt Richardson Microsoft Research Search = Modeling User Behavior

More information

SEARCHMETRICS WHITEPAPER RANKING FACTORS Targeted Analysis for more Success on Google and in your Online Market

SEARCHMETRICS WHITEPAPER RANKING FACTORS Targeted Analysis for more Success on Google and in your Online Market 2018 SEARCHMETRICS WHITEPAPER RANKING FACTORS 2018 Targeted for more Success on Google and in your Online Market Table of Contents Introduction: Why ranking factors for niches?... 3 Methodology: Which

More information

Effect of log-based Query Term Expansion on Retrieval Effectiveness in Patent Searching

Effect of log-based Query Term Expansion on Retrieval Effectiveness in Patent Searching Effect of log-based Query Term Expansion on Retrieval Effectiveness in Patent Searching Wolfgang Tannebaum, Parvaz Madabi and Andreas Rauber Institute of Software Technology and Interactive Systems, Vienna

More information

Query Likelihood with Negative Query Generation

Query Likelihood with Negative Query Generation Query Likelihood with Negative Query Generation Yuanhua Lv Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801 ylv2@uiuc.edu ChengXiang Zhai Department of Computer

More information

A Comparative Analysis of Cascade Measures for Novelty and Diversity

A Comparative Analysis of Cascade Measures for Novelty and Diversity A Comparative Analysis of Cascade Measures for Novelty and Diversity Charles Clarke, University of Waterloo Nick Craswell, Microsoft Ian Soboroff, NIST Azin Ashkan, University of Waterloo Background Measuring

More information

AN ENHANCED ATTRIBUTE RERANKING DESIGN FOR WEB IMAGE SEARCH

AN ENHANCED ATTRIBUTE RERANKING DESIGN FOR WEB IMAGE SEARCH AN ENHANCED ATTRIBUTE RERANKING DESIGN FOR WEB IMAGE SEARCH Sai Tejaswi Dasari #1 and G K Kishore Babu *2 # Student,Cse, CIET, Lam,Guntur, India * Assistant Professort,Cse, CIET, Lam,Guntur, India Abstract-

More information

An Empirical Evaluation of User Interfaces for Topic Management of Web Sites

An Empirical Evaluation of User Interfaces for Topic Management of Web Sites An Empirical Evaluation of User Interfaces for Topic Management of Web Sites Brian Amento AT&T Labs - Research 180 Park Avenue, P.O. Box 971 Florham Park, NJ 07932 USA brian@research.att.com ABSTRACT Topic

More information

Information Retrieval Spring Web retrieval

Information Retrieval Spring Web retrieval Information Retrieval Spring 2016 Web retrieval The Web Large Changing fast Public - No control over editing or contents Spam and Advertisement How big is the Web? Practically infinite due to the dynamic

More information

Project Report Ranking Web Search Results Using Personal Preferences

Project Report Ranking Web Search Results Using Personal Preferences Project Report Ranking Web Search Results Using Personal Preferences Shantanu Joshi Stanford University joshi4@cs.stanford.edu Pararth Shah Stanford University pararth@cs.stanford.edu December 14, 2013

More information

Review of. Amanda Spink. and her work in. Web Searching and Retrieval,

Review of. Amanda Spink. and her work in. Web Searching and Retrieval, Review of Amanda Spink and her work in Web Searching and Retrieval, 1997-2004 Larry Reeve for Dr. McCain INFO861, Winter 2004 Term Project Table of Contents Background of Spink 2 Web Search and Retrieval

More information

Influence of Vertical Result in Web Search Examination

Influence of Vertical Result in Web Search Examination Influence of Vertical Result in Web Search Examination Zeyang Liu, Yiqun Liu, Ke Zhou, Min Zhang, Shaoping Ma Tsinghua National Laboratory for Information Science and Technology, Department of Computer

More information

IJREAT International Journal of Research in Engineering & Advanced Technology, Volume 1, Issue 5, Oct-Nov, ISSN:

IJREAT International Journal of Research in Engineering & Advanced Technology, Volume 1, Issue 5, Oct-Nov, ISSN: IJREAT International Journal of Research in Engineering & Advanced Technology, Volume 1, Issue 5, Oct-Nov, 20131 Improve Search Engine Relevance with Filter session Addlin Shinney R 1, Saravana Kumar T

More information

Machine Learning Applications to Modeling Web Searcher Behavior Eugene Agichtein

Machine Learning Applications to Modeling Web Searcher Behavior Eugene Agichtein Machine Learning Applications to Modeling Web Searcher Behavior Eugene Agichtein Intelligent Information Access Lab (IRLab) Emory University Talk Outline Overview of the Emory IR Lab Intent-centric Web

More information

Extracting Visual Snippets for Query Suggestion in Collaborative Web Search

Extracting Visual Snippets for Query Suggestion in Collaborative Web Search Extracting Visual Snippets for Query Suggestion in Collaborative Web Search Hannarin Kruajirayu, Teerapong Leelanupab Knowledge Management and Knowledge Engineering Laboratory Faculty of Information Technology

More information

Joho, H. and Jose, J.M. (2006) A comparative study of the effectiveness of search result presentation on the web. Lecture Notes in Computer Science 3936:pp. 302-313. http://eprints.gla.ac.uk/3523/ A Comparative

More information

Evaluating Implicit Measures to Improve Web Search

Evaluating Implicit Measures to Improve Web Search Evaluating Implicit Measures to Improve Web Search Abstract Steve Fox, Kuldeep Karnawat, Mark Mydland, Susan Dumais, and Thomas White {stevef, kuldeepk, markmyd, sdumais, tomwh}@microsoft.com Of growing

More information

UNIT-V WEB MINING. 3/18/2012 Prof. Asha Ambhaikar, RCET Bhilai.

UNIT-V WEB MINING. 3/18/2012 Prof. Asha Ambhaikar, RCET Bhilai. UNIT-V WEB MINING 1 Mining the World-Wide Web 2 What is Web Mining? Discovering useful information from the World-Wide Web and its usage patterns. 3 Web search engines Index-based: search the Web, index

More information

Paper SAS Taming the Rule. Charlotte Crain, Chris Upton, SAS Institute Inc.

Paper SAS Taming the Rule. Charlotte Crain, Chris Upton, SAS Institute Inc. ABSTRACT Paper SAS2620-2016 Taming the Rule Charlotte Crain, Chris Upton, SAS Institute Inc. When business rules are deployed and executed--whether a rule is fired or not if the rule-fire outcomes are

More information

A Survey Of Different Text Mining Techniques Varsha C. Pande 1 and Dr. A.S. Khandelwal 2

A Survey Of Different Text Mining Techniques Varsha C. Pande 1 and Dr. A.S. Khandelwal 2 A Survey Of Different Text Mining Techniques Varsha C. Pande 1 and Dr. A.S. Khandelwal 2 1 Department of Electronics & Comp. Sc, RTMNU, Nagpur, India 2 Department of Computer Science, Hislop College, Nagpur,

More information

Enhanced Visualization for Web-Based Summaries

Enhanced Visualization for Web-Based Summaries Enhanced Visualization for Web-Based Summaries Brent Wenerstrom Computer Eng. and Computer Science Dept. Duthie Center for Engineering Louisville, Kentucky 40292 brent.wenerstrom@louisville.edu Mehmed

More information

Keywords APSE: Advanced Preferred Search Engine, Google Android Platform, Search Engine, Click-through data, Location and Content Concepts.

Keywords APSE: Advanced Preferred Search Engine, Google Android Platform, Search Engine, Click-through data, Location and Content Concepts. Volume 5, Issue 3, March 2015 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Advanced Preferred

Web Searcher Interactions with Multiple Federated Content Collections. Amanda Spink, Faculty of IT, Queensland University of Technology, QLD 4001, Australia, ah.spink@qut.edu.au. Bernard J. Jansen, School of IST

Retrieval Evaluation. Hongning Wang, CS@UVa. What we have learned so far: indexed corpus, crawler, ranking procedure, doc analyzer, doc rep (index), query rep, feedback (query), evaluation, user

Characterizing Queries in Different Search Tasks. Hawaii International Conference on System Sciences. Jeonghyun Kim, University of North Texas, Jeonghyun.Kim@unt.edu; Ahmet Can, University of North Texas, AhmetCan@my.unt.edu

Analyzing Web Multimedia Query Reformulation Behavior. Liang-Chun Jack Tseng, Faculty of Science, Queensland University of Technology, Brisbane, QLD 4001, Australia, ntjack.au@hotmail.com. Dian Tjondronegoro, Faculty

A Complete Year of User Retrieval Sessions in a Social Sciences Academic Search Engine. Philipp Mayr, Ameni Kacem. arXiv:1706.00816v3 [cs.DL], 23 Sep 2017. GESIS Leibniz Institute for the Social Sciences,

Adaptive Socio-Recommender System for Open Corpus E-Learning. Rosta Farzan, Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA 15260, USA, rosta@cs.pitt.edu. Abstract: With the increasing popularity

Optimized Re-Ranking in Mobile Search Engine Using User Profiling. A. Vincy, M. Kalaiyarasi, C. Kalaiyarasi. PG Student, Department of Computer Science, Arunai Engineering College, Tiruvannamalai, India. International Journal of Innovative Research in Computer and Communication Engineering.

Shared Sensemaking: Enhancing the Value of Collaborative Web Search Tools. Meredith Ringel Morris, Microsoft Research, Redmond, WA, USA, merrie@microsoft.com. Saleema Amershi, University of Washington, Seattle,

TREC 2017 Dynamic Domain Track Overview. Grace Hui Yang (Georgetown University, huiyang@cs.georgetown.edu), Zhiwen Tang (Georgetown University, zt79@georgetown.edu), Ian Soboroff (NIST, ian.soboroff@nist.gov). 1. Introduction

RMIT @ TREC 2016 Dynamic Domain Track: Exploiting Passage Representation for Retrieval and Relevance Feedback. Ameer Albahem (ameer.albahem@rmit.edu.au), Lawrence Cavedon (lawrence.cavedon@rmit.edu.au), Damiano

A Tale of Two Studies Establishing Google & Bing Click-Through Rates. Behavioral study by Slingshot SEO, Inc., using client data from January 2011 to August 2011. What's Inside: 1. Introduction 2. Definition

Digital Marketing Jargon Buster: Ad Network, Analytics or Web Analytics Tools, Avatar, App (Application), Blog, Banner Ad. Ad Network: A platform connecting advertisers with publishers who want to host their ads. The advertiser pays the network every time an agreed event takes place,

Indexing and Query Processing. Jaime Arguello, INLS 509: Information Retrieval, jarguell@email.unc.edu, January 28, 2013. Basic Information Retrieval Process: information need, document representation

Interaction Model to Predict Subjective Specificity of Search Results. Kumaripaba Athukorala, Antti Oulasvirta, Dorota Glowacka, Jilles Vreeken, Giulio Jacucci. Helsinki Institute for Information Technology

SAS Visual Analytics 8.2: Getting Started with Reports. Introduction: Reporting. The SAS Visual Analytics tools give you everything you need to produce and distribute clear and compelling reports. SAS Visual

CS 6740: Advanced Language Technologies, April 2, 2010. Lecture 15: Implicit Relevance Feedback & Clickthrough Data. Lecturer: Lillian Lee. Scribes: Navin Sivakumar, Lakshmi Ganesh, Taiyang Chen. Abstract: Explicit

Exploiting On-Chip Data Transfers for Improving Performance of Chip-Scale Multiprocessors. G. Chen 1, M. Kandemir 1, I. Kolcu 2, and A. Choudhary 3. 1 Pennsylvania State University, PA 16802, USA; 2 UMIST,

TERM BASED WEIGHT MEASURE FOR INFORMATION FILTERING IN SEARCH ENGINES. Mu. Annalakshmi, Research Scholar, Department of Computer Science, Alagappa University, Karaikudi, annalakshmi_mu@yahoo.co.in. Dr. A.

White paper: 4 Steps to Better Keyword Grouping -- Strategies for More Effective & Profitable Keyword Segmentation. 2009, WordStream, Inc. All rights reserved. WordStream technologies are protected by pending

Adobe Marketing Cloud: Data Workbench Controlled Experiments. Contents: Data Workbench Controlled Experiments; How Does Site Identify Visitors?; How Do Controlled Experiments Work?; What Should I

Efficient Multiple-Click Models in Web Search. Fan Guo, Carnegie Mellon University, Pittsburgh, PA 15213, fanguo@cs.cmu.edu. Chao Liu, Microsoft Research, Redmond, WA 98052, chaoliu@microsoft.com. Yi-Min Wang, Microsoft

Usability Evaluation of Cell Phones for Early Adolescent Users. Yassierli*, Melati Gilang, Industrial Management Research Division, Faculty of Industrial Technology, Bandung Institute of Technology, Jl. Ganesa 10, Bandung 40134, Indonesia. Abstract: The increasing number

ASSETS: U: Web Accessibility Evaluation with the Crowd: Rapidly Coding User Testing Video. Mitchell Gordon, University of Rochester, Rochester, NY 14627, m.gordon@rochester.edu. Abstract: User testing is an