Today's lecture. Basic crawler operation. Crawling picture. What any crawler must do. Simple picture complications


Today's lecture
- Web crawling
- (Near) duplicate detection

(Introduction to Information Retrieval, CS276 Information Retrieval and Web Search: Crawling and Duplicates. Chris Manning and Pandu Nayak)

Basic crawler operation (Sec. 20.2)
- Begin with known "seed" URLs
- Fetch and parse them
  - Extract the URLs they point to
  - Place the extracted URLs on a queue
- Fetch each URL on the queue and repeat

Crawling picture (Sec. 20.2)
(Figure: seed pages; URLs crawled and parsed; URL frontier; the unseen Web)

Simple picture complications
- Web crawling isn't feasible with one machine
  - All of the above steps must be distributed
- Malicious pages
  - Spam pages
  - Spider traps, including dynamically generated ones
- Even non-malicious pages pose challenges
  - Latency/bandwidth to remote servers varies
  - Webmasters' stipulations: how "deep" should you crawl a site's hierarchy?
  - Site mirrors and duplicate pages
- Politeness: don't hit a server too often

What any crawler must do
- Be robust: be immune to spider traps and other malicious behavior from web servers
- Be polite: respect implicit and explicit politeness considerations
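As a concrete illustration of the basic loop, here is a minimal single-machine sketch using only the Python standard library. The names (LinkExtractor, basic_crawl) and the error handling are illustrative, not from the slides, and everything the rest of the lecture adds (politeness, robots.txt, URL filtering, distribution, duplicate elimination) is deliberately left out.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href attributes from <a> tags while parsing a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def basic_crawl(seed_urls, max_pages=100):
    frontier = deque(seed_urls)   # queue of URLs still to fetch
    seen = set(seed_urls)         # URLs already placed on the frontier
    while frontier and max_pages > 0:
        url = frontier.popleft()
        max_pages -= 1
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue               # skip unreachable or malformed URLs
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)   # expand relative URLs (see later slide)
            if absolute not in seen:
                seen.add(absolute)
                frontier.append(absolute)
        yield url, html


# Example with a hypothetical seed:
# for url, html in basic_crawl(["https://example.com/"], max_pages=10):
#     print(url)
```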

Explicit and implicit politeness (Sec. 20.2)
- Explicit politeness: specifications from webmasters on what portions of a site can be crawled (robots.txt)
- Implicit politeness: even with no specification, avoid hitting any site too often

Robots.txt
- Protocol for giving spiders ("robots") limited access to a website, originally from 1994: www.robotstxt.org/robotstxt.html
- The website announces its request on what can(not) be crawled
  - For a server, create a file /robots.txt
  - This file specifies access restrictions

Robots.txt example
- No robot should visit any URL starting with "/yoursite/temp/", except the robot called "searchengine":

  User-agent: *
  Disallow: /yoursite/temp/

  User-agent: searchengine
  Disallow:

What any crawler should do
- Be capable of distributed operation: designed to run on multiple distributed machines
- Be scalable: designed to increase the crawl rate by adding more machines
- Performance/efficiency: permit full use of available processing and network resources

What any crawler should do (cont.)
- Fetch pages of "higher quality" first
- Continuous operation: continue fetching fresh copies of previously fetched pages
- Extensible: adapt to new data formats, protocols

Updated crawling picture
(Figure: seed pages; URLs crawled and parsed; URL frontier; crawling threads; the unseen Web)
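A short sketch of how a crawler might honor the example policy above, using Python's standard urllib.robotparser. Parsing the policy text directly, rather than fetching /robots.txt over the network, keeps the example self-contained and also mirrors the advice later in the lecture to cache robots.txt files.

```python
from urllib.robotparser import RobotFileParser

# The example policy from the slide, parsed directly (no network access needed).
policy = """\
User-agent: *
Disallow: /yoursite/temp/

User-agent: searchengine
Disallow:
""".splitlines()

rp = RobotFileParser()
rp.parse(policy)

print(rp.can_fetch("*", "/yoursite/temp/page.html"))             # False: blocked for generic robots
print(rp.can_fetch("searchengine", "/yoursite/temp/page.html"))  # True: searchengine is exempted
```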

URL frontier (Sec. 20.2)
- Can include multiple pages from the same host
- Must avoid trying to fetch them all at the same time
- Must try to keep all crawling threads busy

Processing steps in crawling
- Pick a URL from the frontier (which one?)
- Fetch the document at the URL
- Parse the document
  - Extract links from it to other docs (URLs)
- Check if the URL has content already seen
  - If not, add to indexes
- For each extracted URL
  - Ensure it passes certain URL filter tests (e.g., only crawl .edu, obey robots.txt, etc.)
  - Check if it is already in the frontier (duplicate URL elimination)

Basic crawl architecture (Sec. 20.2.2)
(Figure: WWW -> DNS / Fetch -> Parse -> Content seen? (Doc FPs) -> URL filter (robots filters) -> Dup URL elim (URL set) -> URL frontier)

DNS (Domain Name Server)
- A lookup service on the internet: given a URL, retrieve its IP address
- Service provided by a distributed set of servers, so lookup latencies can be high (even seconds)
- Common OS implementations of DNS lookup are blocking: only one outstanding request at a time
- Solutions
  - DNS caching
  - Batch DNS resolver: collects requests and sends them out together

Parsing: URL normalization
- When a fetched document is parsed, some of the extracted links are relative URLs
- E.g., http://en.wikipedia.org/wiki/main_page has a relative link to /wiki/wikipedia:general_disclaimer, which is the same as the absolute URL http://en.wikipedia.org/wiki/wikipedia:general_disclaimer
- During parsing, we must normalize (expand) such relative URLs

Content seen?
- Duplication is widespread on the web
- If the page just fetched is already in the index, do not process it further
- This is verified using document fingerprints or shingles (second part of this lecture)
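The relative-link expansion above can be done with urljoin from Python's standard library; a quick sketch using the Wikipedia example from the slide:

```python
from urllib.parse import urljoin

base = "http://en.wikipedia.org/wiki/main_page"
relative = "/wiki/wikipedia:general_disclaimer"

# urljoin expands the relative link against the page it was found on.
print(urljoin(base, relative))
# -> http://en.wikipedia.org/wiki/wikipedia:general_disclaimer
```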

URL filters and robots.txt
- URL filters: regular expressions for URLs to be crawled or not
- Once a robots.txt file is fetched from a site, we need not fetch it repeatedly
  - Doing so burns bandwidth and hits the web server
  - Cache robots.txt files

Duplicate URL elimination
- For a non-continuous (one-shot) crawl, test whether an extracted+filtered URL has already been passed to the frontier
- For a continuous crawl, see the details of the frontier implementation

Distributing the crawler
- Run multiple crawl threads, under different processes, potentially at different nodes
  - Geographically distributed nodes
- Partition the hosts being crawled into nodes
  - A hash is used for the partition (see the sketch after this slide group)
- How do these nodes communicate and share URLs?

Communication between nodes
- The output of the URL filter at each node is sent to the duplicate URL eliminator of the appropriate node
(Figure: the basic crawl architecture, with a host splitter after the URL filter that sends URLs to and receives URLs from other nodes)

URL frontier: two main considerations
- Politeness: do not hit a web server too frequently
- Freshness: crawl some pages more often than others
  - E.g., pages (such as news sites) whose content changes often
- These goals may conflict with each other
  - E.g., a simple priority queue fails: many links out of a page go to its own site, creating a burst of accesses to that site

Politeness challenges
- Even if we restrict only one thread to fetch from a host, we can hit it repeatedly
- Common heuristic: insert a time gap between successive requests to a host that is >> the time of the most recent fetch from that host
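For the "Distributing the crawler" slide above, here is a minimal sketch of hash-based host partitioning. The MD5 choice and the node_for_url name are illustrative; the point is only that all URLs from a given host map to the same node, so that one node alone is responsible for politeness toward that host.

```python
import hashlib
from urllib.parse import urlparse


def node_for_url(url, num_nodes):
    """Assign a URL's host to a crawler node by hashing the host name.
    Every URL from the same host maps to the same node."""
    host = urlparse(url).netloc.lower()
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_nodes


# Example: with 4 nodes, both en.wikipedia.org URLs go to the same node.
print(node_for_url("http://en.wikipedia.org/wiki/main_page", 4))
print(node_for_url("http://en.wikipedia.org/wiki/wikipedia:general_disclaimer", 4))
```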

URL frontier: the Mercator scheme
(Figure: URLs flow in from the top to a prioritizer, into K front queues, through a biased front queue selector / back queue router, into B back queues with a single host on each, then through a back queue selector to a crawl thread requesting a URL)

Mercator URL frontier
- URLs flow in from the top into the frontier
- Front queues manage prioritization
- Back queues enforce politeness
- Each queue is FIFO

Front queues
(Figure: prioritizer feeding front queues 1..K, drained by the biased front queue selector / back queue router)
- The prioritizer assigns each URL an integer priority between 1 and K and appends the URL to the corresponding queue
- Heuristics for assigning priority
  - Refresh rate sampled from previous crawls
  - Application-specific (e.g., "crawl news sites more often")

Biased front queue selector
- When a back queue requests a URL (in a sequence to be described): picks a front queue from which to pull a URL
- This choice can be round robin biased to queues of higher priority, or some more sophisticated variant
  - Can be randomized

Back queues
(Figure: back queue router feeding back queues 1..B, drained by the back queue selector, which consults a heap)
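A small sketch of the prioritizer and a biased front queue selector. The exponential weighting is an illustrative choice, since the slides leave the exact bias open (round robin, randomized, or something more sophisticated).

```python
import random
from collections import deque

K = 3                                       # number of front (priority) queues
front_queues = [deque() for _ in range(K)]  # queue 0 holds priority 1 (highest), etc.


def prioritize(url, priority):
    """Prioritizer: append the URL to the front queue for its priority (1..K)."""
    front_queues[priority - 1].append(url)


def pull_from_front_queues():
    """Biased front queue selector: higher-priority queues are chosen more often.
    The exponential weights are only one possible bias."""
    non_empty = [i for i in range(K) if front_queues[i]]
    if not non_empty:
        return None
    weights = [2 ** (K - i) for i in non_empty]
    chosen = random.choices(non_empty, weights=weights, k=1)[0]
    return front_queues[chosen].popleft()


# Example with hypothetical URLs: news pages get priority 1, the rest priority 3.
prioritize("http://news.example.com/today", 1)
prioritize("http://www.example.com/about", 3)
print(pull_from_front_queues())   # usually the news URL
```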

Back queue invariants
- Each back queue is kept non-empty while the crawl is in progress
- Each back queue only contains URLs from a single host
  - Maintain a table from host names to back queues

Back queue heap
- One entry for each back queue
- The entry is the earliest time t_e at which the host corresponding to that back queue can be hit again
- This earliest time is determined from
  - The last access to that host
  - Any time buffer heuristic we choose

Back queue processing
- A crawler thread seeking a URL to crawl:
  - Extracts the root of the heap
  - Fetches the URL at the head of the corresponding back queue q (looked up from the table)
  - Checks if queue q is now empty; if so, pulls a URL v from the front queues
    - If there is already a back queue for v's host, appends v to it and pulls another URL from the front queues; repeat
    - Else adds v to q
  - When q is non-empty, creates a heap entry for it

Number of back queues B
- Keep all threads busy while respecting politeness
- Mercator recommendation: three times as many back queues as crawler threads
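A sketch of the back-queue processing step just described, with a heap keyed on the earliest allowed fetch time per back queue. The fixed POLITENESS_GAP, the data-structure names, and the assumption that the frontier has already been primed with back queues and heap entries are illustrative simplifications of Mercator, not its actual implementation; pull_from_front_queues is assumed to behave like the front-queue selector sketched earlier.

```python
import heapq
import time
from urllib.parse import urlparse

POLITENESS_GAP = 10.0   # seconds between hits to the same host (illustrative value)

back_queues = {}        # queue id -> deque of URLs, all from a single host
host_to_queue = {}      # host name -> queue id
heap = []               # entries (earliest allowed fetch time t_e, queue id)


def host_of(url):
    return urlparse(url).netloc.lower()


def next_url_to_crawl(pull_from_front_queues):
    """A crawler thread seeking a URL: extract the heap root, wait until its
    host may be hit again, and take the head of the corresponding back queue."""
    t_e, qid = heapq.heappop(heap)
    time.sleep(max(0.0, t_e - time.time()))      # enforce politeness for this host
    url = back_queues[qid].popleft()

    if not back_queues[qid]:
        # Queue q went empty: release its host and refill from the front queues.
        host_to_queue.pop(host_of(url), None)
        while True:
            v = pull_from_front_queues()
            if v is None:
                break
            h = host_of(v)
            if h in host_to_queue:
                back_queues[host_to_queue[h]].append(v)   # host already has a back queue
            else:
                host_to_queue[h] = qid
                back_queues[qid].append(v)
                break

    if back_queues[qid]:
        # Conservatively schedule the queue's next allowed fetch.
        heapq.heappush(heap, (time.time() + POLITENESS_GAP, qid))
    return url
```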

Near duplicate document detection

Duplicate documents
- The web is full of duplicated content
- Strict duplicate detection = exact match: not that common
- But many, many cases of near duplicates
  - E.g., the last-modified date is the only difference between two copies of a page

Duplicate/near-duplicate detection
- Duplication: exact match can be detected with fingerprints
- Near-duplication: approximate match
  - Overview
    - Compute syntactic similarity with an edit-distance measure
    - Use a similarity threshold to detect near-duplicates
      - E.g., similarity > 80% => documents are near duplicates
      - Not transitive, though sometimes used transitively

Computing similarity
- Features
  - Segments of a document (natural or artificial breakpoints)
  - Shingles (word N-grams)
    - "a rose is a rose is a rose" => the 4-grams are a_rose_is_a, rose_is_a_rose, is_a_rose_is, a_rose_is_a
- Similarity measure between two docs (= sets of shingles)
  - Jaccard coefficient: size of intersection / size of union

Shingles + set intersection
- Computing the exact set intersection of shingles between all pairs of documents is expensive
- Approximate using a cleverly chosen subset of shingles from each document (a sketch)
- Estimate (size_of_intersection / size_of_union) from the short sketches
(Figure: Doc A -> shingle set A -> sketch A; Doc B -> shingle set B -> sketch B; Jaccard estimated from the sketches)

Sketch of a document
- Create a sketch vector (of size ~200) for each document
- Documents that share >= t (say 80%) of corresponding vector elements are deemed near duplicates
- For doc D, sketch_D[i] is computed as follows:
  - Let f map all shingles in the universe to 1..2^m (e.g., f = fingerprinting)
  - Let π_i be a random permutation on 1..2^m
  - Pick MIN { π_i(f(s)) } over all shingles s in D

Computing Sketch[i] for Doc1
(Figure: start with the 64-bit values f(shingles), permute them on the number line with π_i, and pick the min value)

Test if Doc1.Sketch[i] = Doc2.Sketch[i]
(Figure: the minimum permuted values A and B of the two documents; are these equal? Test for 200 random permutations: π_1, π_2, ..., π_200)
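A minimal sketch of word 4-gram shingling and the exact Jaccard coefficient, using the rose example from the slide; the second document is an invented variant added only for comparison.

```python
def shingles(text, k=4):
    """Word k-grams ('shingles') of a document; k = 4 as in the rose example."""
    words = text.lower().split()
    return {"_".join(words[i:i + k]) for i in range(len(words) - k + 1)}


def jaccard(a, b):
    """Jaccard coefficient: size of intersection / size of union."""
    return len(a & b) / len(a | b)


doc1 = "a rose is a rose is a rose"
doc2 = "a rose is a rose is a flower"
s1, s2 = shingles(doc1), shingles(doc2)
print(sorted(s1))       # ['a_rose_is_a', 'is_a_rose_is', 'rose_is_a_rose']
print(jaccard(s1, s2))  # 0.75: 3 shared shingles out of 4 distinct ones
```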

However...
(Figure: the permuted shingle values of Document 1 and Document 2, with minima A and B)
- A = B iff the shingle with the MIN value in the union of Doc1 and Doc2 is common to both (i.e., lies in the intersection)
- Claim: this happens with probability size_of_intersection / size_of_union

Set similarity of sets C_i, C_j
- Jaccard(C_i, C_j) = |C_i ∩ C_j| / |C_i ∪ C_j|
- View the sets as columns of a matrix A, one row for each element in the universe; a_ij = 1 indicates the presence of item i in set j
- Example:
    C_1  C_2
     0    1
     1    0
     1    1     Jaccard(C_1, C_2) = 2/5 = 0.4
     0    0
     1    1
     0    1

Key observation
- For columns C_i, C_j, there are four types of rows:
    type  C_i  C_j
     A     1    1
     B     1    0
     C     0    1
     D     0    0
- Overload notation: A = # of rows of type A
- Claim: Jaccard(C_i, C_j) = A / (A + B + C)

Min hashing
- Randomly permute the rows
- Hash h(C_i) = index of the first row with a 1 in column C_i
- Surprising property: P[ h(C_i) = h(C_j) ] = Jaccard(C_i, C_j)
- Why? Both are A / (A + B + C)
  - Look down columns C_i, C_j until the first non-type-D row
  - h(C_i) = h(C_j) <=> that row is of type A

Random permutations
- Random permutations are expensive to compute
- Linear permutations work well in practice
- For a large prime p, consider permutations over {0, ..., p-1} drawn from the set
  F_p = { π_{a,b} : 1 <= a <= p-1, 0 <= b <= p-1 }, where π_{a,b}(x) = (ax + b) mod p

Final notes
- Shingling is a randomized algorithm
  - Our analysis did not presume any probability model on the inputs
  - It will give us the right (wrong) answer with some probability on any input
- We've described how to detect near-duplication in a pair of documents
  - In real life we'll have to concurrently look at many pairs
- See the textbook for details
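Putting the pieces together, a self-contained sketch of min-hash sketches built from linear permutations π_{a,b}(x) = (ax + b) mod p. The particular prime, the SHA-1-based fingerprint f, and the two small shingle sets are illustrative choices, not from the slides.

```python
import hashlib
import random

P = (1 << 89) - 1   # a Mersenne prime larger than 2^64, so 64-bit fingerprints fit in {0,...,p-1}


def fingerprint(shingle):
    """64-bit fingerprint f(s) of a shingle (illustrative; any good hash works)."""
    return int.from_bytes(hashlib.sha1(shingle.encode("utf-8")).digest()[:8], "big")


def make_permutation(rng):
    """A linear permutation pi_{a,b}(x) = (a*x + b) mod p from the family F_p."""
    a, b = rng.randrange(1, P), rng.randrange(0, P)
    return lambda x: (a * x + b) % P


def sketch(shingle_set, perms):
    """sketch[i] = MIN over shingles s of pi_i(f(s))."""
    fps = [fingerprint(s) for s in shingle_set]
    return [min(pi(x) for x in fps) for pi in perms]


def estimated_jaccard(sk1, sk2):
    """Fraction of agreeing sketch positions estimates the Jaccard coefficient."""
    return sum(a == b for a, b in zip(sk1, sk2)) / len(sk1)


rng = random.Random(42)
perms = [make_permutation(rng) for _ in range(200)]   # 200 permutations, as on the slides

set1 = {"a_rose_is_a", "rose_is_a_rose", "is_a_rose_is"}
set2 = {"a_rose_is_a", "rose_is_a_rose", "is_a_rose_a"}
print(estimated_jaccard(sketch(set1, perms), sketch(set2, perms)))
# Should be close to the exact Jaccard of these sets, 2/4 = 0.5.
```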