Introduction to Information Retrieval
CS276: Information Retrieval and Web Search
Chris Manning, Pandu Nayak and Prabhakar Raghavan
Crawling and Duplicates

Today's lecture
- Web crawling
- (Near) duplicate detection

Basic crawler operation
- Begin with known seed URLs
- Fetch and parse them
- Extract the URLs they point to
- Place the extracted URLs on a queue
- Fetch each URL on the queue and repeat (a minimal code sketch of this loop follows below)

Crawling picture
- (Diagram) Seed pages are fetched first; the URLs they point to enter the URL frontier of unseen URLs, which drives further fetching.

Simple picture complications
- Web crawling isn't feasible with one machine: all of the above steps must be distributed
- Malicious pages: spam pages; spider traps, including dynamically generated ones
- Even non-malicious pages pose challenges: latency/bandwidth to remote servers vary; webmasters' stipulations (how deep should you crawl a site's URL hierarchy?); site mirrors and duplicate pages
- Politeness: don't hit a server too often

What any crawler must do
- Be polite: respect implicit and explicit politeness considerations; only crawl allowed pages; respect robots.txt (more on this shortly)
- Be robust: be immune to spider traps and other malicious behavior from web servers
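As a rough illustration of the basic crawl loop above, here is a minimal single-threaded sketch. It assumes the third-party requests and beautifulsoup4 packages and a caller-supplied seed list, and it deliberately ignores politeness, robots.txt, spider traps, and content-seen checks, which the following slides address.

```python
from collections import deque
from urllib.parse import urljoin, urldefrag

import requests
from bs4 import BeautifulSoup

def basic_crawl(seed_urls, max_pages=100):
    """Minimal crawl loop: fetch, parse, extract URLs, enqueue. No politeness handling yet."""
    frontier = deque(seed_urls)   # queue of URLs still to fetch
    seen = set(seed_urls)         # URLs already placed on the frontier
    fetched = 0
    while frontier and fetched < max_pages:
        url = frontier.popleft()
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            continue              # skip unreachable or misbehaving servers
        fetched += 1
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            # Resolve relative links and drop #fragments before queueing
            link, _frag = urldefrag(urljoin(url, a["href"]))
            if link.startswith("http") and link not in seen:
                seen.add(link)
                frontier.append(link)
        yield url, resp.status_code
```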

What any crawler should do
- Be capable of distributed operation: designed to run on multiple distributed machines
- Be scalable: designed to increase the crawl rate by adding more machines
- Performance/efficiency: permit full use of available processing and network resources
- Fetch pages of higher quality first
- Continuous operation: continue fetching fresh copies of previously fetched pages
- Extensible: adapt to new data formats and protocols

Updated crawling picture
- (Diagram updating the earlier crawling picture, with the URL frontier at its center.)

URL frontier
- Can include multiple pages from the same host
- Must avoid trying to fetch them all at the same time
- Must try to keep all crawling threads busy

Explicit and implicit politeness
- Explicit politeness: specifications from webmasters on what portions of a site can be crawled (robots.txt)
- Implicit politeness: even with no specification, avoid hitting any site too often

Robots.txt
- Protocol for giving spiders ("robots") limited access to a website, originally from 1994: www.robotstxt.org/wc/norobots.html
- The website announces its requests on what can(not) be crawled
- For a server, create a file /robots.txt; this file specifies access restrictions (a sketch of checking it from a crawler follows below)
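To make the robots.txt slide concrete, the Python standard library's urllib.robotparser can fetch and evaluate a site's /robots.txt. This is a small sketch; the crawler user-agent name "MyCrawler" is a hypothetical example, and caching of the parsed file (recommended on a later slide) is omitted here.

```python
from urllib.robotparser import RobotFileParser
from urllib.parse import urlsplit

def allowed_by_robots(url, user_agent="MyCrawler"):
    """Check a URL against the site's /robots.txt (re-fetched on every call; caching comes later)."""
    parts = urlsplit(url)
    rp = RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()                      # fetches and parses the robots.txt file
    return rp.can_fetch(user_agent, url)

# Example: only crawl a page if the site's robots.txt permits it
# if allowed_by_robots("https://example.com/some/page.html"):
#     fetch_page(...)
```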

Robots.txt example
- No robot should visit any URL starting with "/yoursite/temp/", except the robot called "searchengine":

    User-agent: *
    Disallow: /yoursite/temp/

    User-agent: searchengine
    Disallow:

Processing steps in crawling
- Pick a URL from the frontier
- Fetch the document at the URL
- Parse the fetched document and extract links from it to other docs (URLs)
- Check if the URL's content has already been seen; if not, add it to the indexes
- For each extracted URL: ensure it passes certain URL filter tests; check if it is already in the frontier (duplicate URL elimination)

Basic crawl architecture
- (Diagram: WWW -> fetch -> parse -> content seen? -> URL filter -> duplicate URL elimination -> URL frontier, with a DNS module serving the fetcher.)

DNS (Domain Name System)
- A lookup service on the internet: given a URL's hostname, retrieve its IP address
- Service provided by a distributed set of servers, so lookup latencies can be high (even seconds)
- Common OS implementations of DNS lookup are blocking: only one outstanding request at a time
- Solutions: DNS caching; a batch DNS resolver that collects requests and sends them out together

Parsing: URL normalization
- When a fetched document is parsed, some of the extracted links are relative URLs
- E.g., http://en.wikipedia.org/wiki/Main_Page has a relative link to /wiki/Wikipedia:General_disclaimer, which is the same as the absolute URL http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer
- During parsing, such relative URLs must be normalized (expanded); see the sketch below

Content seen?
- Duplication is widespread on the web
- If the page just fetched is already in the index, do not process it further
- This is verified using document fingerprints or shingles (second part of this lecture; an exact-match fingerprint sketch follows below)
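A small sketch of the URL normalization step above, using only the Python standard library. The exact set of normalizations a production crawler applies (lowercasing hosts, removing default ports, and so on) is an assumption here, not something the slide prescribes.

```python
from urllib.parse import urljoin, urldefrag, urlsplit, urlunsplit

def normalize(base_url, href):
    """Expand a (possibly relative) link found on base_url into a canonical absolute URL."""
    absolute = urljoin(base_url, href)          # resolve relative references
    absolute, _fragment = urldefrag(absolute)   # drop #fragment: it names a position, not a page
    parts = urlsplit(absolute)
    # Lowercase scheme and host; path case is significant and left alone
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(),
                       parts.path or "/", parts.query, ""))

assert normalize("http://en.wikipedia.org/wiki/Main_Page",
                 "/wiki/Wikipedia:General_disclaimer") == \
       "http://en.wikipedia.org/wiki/Wikipedia:General_disclaimer"
```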

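The "Content seen?" test for exact duplicates can be approximated with a fingerprint of the fetched text; the near-duplicate machinery (shingles and sketches) comes in the second part of the lecture. This is a minimal sketch, and the whitespace normalization and choice of SHA-1 are assumptions for illustration.

```python
import hashlib

seen_fingerprints = set()   # fingerprints of pages already indexed

def content_already_seen(page_text: str) -> bool:
    """Return True if an identical page (up to whitespace) was fetched before."""
    canonical = " ".join(page_text.split())              # crude text normalization
    fp = hashlib.sha1(canonical.encode("utf-8")).digest()
    if fp in seen_fingerprints:
        return True
    seen_fingerprints.add(fp)
    return False
```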
Filters and robots.txt
- Filters: regular expressions for URLs to be crawled or not
- Once a robots.txt file is fetched from a site, it need not be fetched repeatedly; doing so burns bandwidth and hits the web server
- Cache robots.txt files

Duplicate URL elimination
- For a non-continuous (one-shot) crawl, test whether an extracted and filtered URL has already been passed to the frontier
- For a continuous crawl, see the details of the frontier implementation below

Distributing the crawler
- Run multiple crawl threads, under different processes, potentially at different (geographically distributed) nodes
- Partition the hosts being crawled across nodes; a hash is used for the partition (see the sketch below)
- How do these nodes communicate and share URLs?

Communication between nodes
- The output of the URL filter at each node is sent to the duplicate URL eliminator of the appropriate node

URL frontier: two main considerations
- Politeness: do not hit a web server too frequently
- Freshness: crawl some pages more often than others, e.g., pages (such as news sites) whose content changes often
- These goals may conflict with each other (e.g., a simple priority queue fails: many links out of a page go to its own site, creating a burst of accesses to that site)

Politeness challenges
- Even if we restrict only one thread to fetch from a host, it can hit that host repeatedly
- Common heuristic: insert a time gap between successive requests to a host that is >> the time taken by the most recent fetch from that host (a small sketch follows)
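A minimal illustration of partitioning hosts across crawler nodes by hashing, as in the "Distributing the crawler" slide. The node count and the use of MD5 as the hash are assumptions for the example, not part of the lecture.

```python
import hashlib
from urllib.parse import urlsplit

NUM_NODES = 4   # assumed number of crawler nodes

def node_for_url(url: str) -> int:
    """Assign a URL to a crawler node by hashing its host, so each host is crawled by one node."""
    host = urlsplit(url).netloc.lower()
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_NODES

# All URLs on the same host land on the same node:
# node_for_url("https://example.com/a") == node_for_url("https://example.com/b")
```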

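The politeness heuristic above can be sketched as a per-host gate: record when each host was last fetched and how long that fetch took, and allow a new request only after a large multiple of that time has elapsed. The multiplier of 10 and the minimum-cost floor are assumed values, not the lecture's.

```python
import time

last_fetch_end = {}    # host -> time the last fetch finished
last_fetch_cost = {}   # host -> duration of that fetch (seconds)
GAP_FACTOR = 10        # gap >> time of the most recent fetch; the factor is an assumed choice

def may_fetch(host: str) -> bool:
    """Politeness gate: allow a request only after a gap much longer than the last fetch took."""
    if host not in last_fetch_end:
        return True
    return time.monotonic() >= last_fetch_end[host] + GAP_FACTOR * last_fetch_cost[host]

def record_fetch(host: str, started: float) -> None:
    """Call after a fetch completes to update the politeness bookkeeping."""
    now = time.monotonic()
    last_fetch_end[host] = now
    last_fetch_cost[host] = max(now - started, 0.05)   # floor to avoid a zero gap
```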
URL frontier: the Mercator scheme
- (Diagram) URLs enter at the top, pass through K front queues, a biased front queue selector, and B back queues before being handed to crawl threads.

Mercator URL frontier
- URLs flow in from the top into the frontier
- Front queues manage prioritization
- Back queues enforce politeness
- Each queue is FIFO

Front queues
- (Diagram of the K prioritized FIFO front queues feeding the biased front queue selector.)
- The prioritizer assigns each URL an integer priority between 1 and K and appends the URL to the corresponding queue
- Heuristics for assigning priority: refresh rate sampled from previous crawls; application-specific criteria (e.g., "crawl news sites more often")
- A sketch of this prioritization step follows below

Biased front queue selector
- When a back queue requests a URL (in a sequence to be described), it picks a front queue from which to pull a URL
- This choice can be round robin biased toward queues of higher priority, or some more sophisticated variant
- Can be randomized

Back queues
- (Diagram of the B per-host FIFO back queues feeding the back queue selector.)
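A rough sketch of the front-queue half of the Mercator frontier described above: a prioritizer maps each URL to one of K FIFO queues, and the selector pulls from queues with a bias toward higher priority. The priority function (based on an assumed change-rate estimate) and the exponential bias weights are illustrative assumptions, not Mercator's actual settings.

```python
import random
from collections import deque

K = 5                                        # number of front queues (assumed)
front_queues = [deque() for _ in range(K)]   # queue 0 = highest priority

def prioritize(url: str, estimated_change_rate: float) -> int:
    """Map a URL to a priority 0..K-1; here simply from an assumed change-rate estimate in [0, 1]."""
    # Faster-changing pages (e.g., news) get smaller (better) priority numbers.
    return min(K - 1, int((1.0 - estimated_change_rate) * K))

def add_url(url: str, estimated_change_rate: float) -> None:
    front_queues[prioritize(url, estimated_change_rate)].append(url)

def pull_front_queue():
    """Biased selection: higher-priority queues are chosen more often, but none starves."""
    weights = [2 ** (K - 1 - i) for i in range(K)]        # assumed exponential bias
    order = random.choices(range(K), weights=weights, k=K)
    for i in dict.fromkeys(order):                        # try the chosen queues, deduplicated
        if front_queues[i]:
            return front_queues[i].popleft()
    for q in front_queues:                                # fall back to any non-empty queue
        if q:
            return q.popleft()
    return None
```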

Back queue invariants
- Each back queue is kept non-empty while the crawl is in progress
- Each back queue only contains URLs from a single host
- Maintain a table from hosts to back queues

Back queue heap
- One entry for each back queue
- The entry is the earliest time t_e at which the host corresponding to the back queue can be hit again
- This earliest time is determined from the last access to that host and any time-buffer heuristic we choose

Back queue processing
- A crawler thread seeking a URL to crawl:
  - Extracts the root of the heap
  - Fetches the URL at the head of the corresponding back queue q (looked up from the table)
  - Checks if queue q is now empty; if so, pulls a URL v from the front queues:
    - If there is already a back queue for v's host, append v to that queue and pull another URL from the front queues; repeat
    - Else add v to q
  - When q is non-empty, create a heap entry for it
- A sketch of this heap-driven scheduling follows below

Number of back queues B
- Keep all threads busy while respecting politeness
- Mercator recommendation: three times as many back queues as crawler threads

Near duplicate document detection

Duplicate documents
- The web is full of duplicated content
- Strict duplicate detection = exact match: not that common
- But many, many cases of near duplicates, e.g., the Last-Modified date is the only difference between two copies of a page
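A compressed sketch of the heap-driven back-queue scheduling described above, using Python's heapq. It simplifies Mercator in two stated ways: back queues are keyed directly by host rather than drawn from a fixed pool of B queues, and the time-buffer heuristic uses a fixed placeholder delay; both are assumptions of this sketch, not the lecture's algorithm.

```python
import heapq
import time
from collections import deque
from urllib.parse import urlsplit

back_queues = {}   # host -> FIFO of URLs for that host (one back queue per host)
heap = []          # entries: (earliest time the host may be hit again, host)

def add_to_back_queues(url):
    """Route a URL to its host's back queue, creating the queue (and a heap entry) if needed."""
    host = urlsplit(url).netloc
    if host not in back_queues:
        back_queues[host] = deque()
        heapq.heappush(heap, (time.monotonic(), host))   # a new host may be hit immediately
    back_queues[host].append(url)

def next_url(pull_front_queue, gap_factor=10.0):
    """Return the next URL to fetch, respecting per-host politeness delays.

    pull_front_queue() is assumed to yield URLs chosen from the front queues.
    """
    earliest, host = heapq.heappop(heap)                 # root of the heap
    time.sleep(max(0.0, earliest - time.monotonic()))    # wait out the politeness gap
    now = time.monotonic()
    url = back_queues[host].popleft()
    if back_queues[host]:
        # Host still has work: reschedule it; the 0.2 s "last fetch time" is a placeholder.
        heapq.heappush(heap, (now + gap_factor * 0.2, host))
    else:
        del back_queues[host]                            # refill from the front queues
        v = pull_front_queue()
        if v is not None:
            add_to_back_queues(v)
    return url
```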

Duplicate/Near-duplicate detection
- Duplication: exact match can be detected with fingerprints
- Near-duplication: approximate match
  - Overview: compute syntactic similarity with an edit-distance measure; use a similarity threshold to detect near-duplicates
  - E.g., similarity > 80% => documents are near duplicates
  - Not transitive, though sometimes used transitively

Computing similarity
- Features: segments of a document (natural or artificial breakpoints); shingles (word n-grams)
- "a rose is a rose is a rose": the 4-grams are a_rose_is_a, rose_is_a_rose, is_a_rose_is, a_rose_is_a
- Similarity measure between two docs (= sets of shingles): set intersection; specifically, Size_of_Intersection / Size_of_Union
- (A small shingling example follows below.)

Shingles + set intersection
- Computing the exact set intersection of shingles between all pairs of documents is expensive/intractable
- Approximate it using a cleverly chosen subset of shingles from each document (a sketch)
- Estimate (size_of_intersection / size_of_union) based on a short sketch

Sketch of a document
- Create a sketch vector (of size ~200) for each document
- Documents that share t (say 80%) of corresponding vector elements are deemed near duplicates
- For doc D, sketch_D[i] is computed as follows:
  - Let f map all shingles in the universe to 0..2^m (e.g., f = fingerprinting)
  - Let p_i be a random permutation on 0..2^m
  - Pick MIN { p_i(f(s)) } over all shingles s in D
- (A code sketch of this computation follows after the diagrams below.)

Computing Sketch[i] for Doc1
- (Diagram) Start with the 64-bit values f(shingle) for Document 1, permute them on the number line with p_i, and pick the minimum value.

Test if Doc1.Sketch[i] = Doc2.Sketch[i]
- (Diagram) Apply the same permutation p_i to the shingle fingerprints of Document 1 and Document 2; are the two minima equal?
- Test for 200 random permutations: p_1, p_2, ..., p_200
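To make the shingling slides concrete, here is a small sketch that builds word 4-gram shingles and computes the exact Jaccard similarity (size of intersection over size of union). The whitespace tokenization and lowercasing are assumptions for illustration.

```python
def shingles(text: str, k: int = 4) -> set:
    """Return the set of word k-gram shingles of a text (joined with underscores)."""
    words = text.lower().split()
    return {"_".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Size_of_Intersection / Size_of_Union of two shingle sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

doc = "a rose is a rose is a rose"
print(shingles(doc))   # {'a_rose_is_a', 'rose_is_a_rose', 'is_a_rose_is'}
print(jaccard(shingles(doc), shingles("a rose is a rose")))
```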

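And a sketch of the MinHash sketch itself, following the "Sketch of a document" slide. Each of the 200 "permutations" is simulated here by XOR-ing a 64-bit shingle fingerprint with a random salt, a common practical stand-in for a true random permutation on 0..2^64; that substitution, and the use of SHA-1 as the fingerprint function f, are assumptions of this sketch rather than what the slide literally prescribes.

```python
import hashlib
import random

NUM_PERMS = 200
random.seed(42)
SALTS = [random.getrandbits(64) for _ in range(NUM_PERMS)]   # one simulated permutation per salt

def fingerprint(shingle: str) -> int:
    """f: map a shingle to a 64-bit value."""
    return int.from_bytes(hashlib.sha1(shingle.encode()).digest()[:8], "big")

def sketch(shingle_set: set) -> list:
    """sketch[i] = min over all shingles s of p_i(f(s)), with p_i simulated by XOR-with-salt.

    Assumes a non-empty shingle set.
    """
    fps = [fingerprint(s) for s in shingle_set]
    return [min(fp ^ salt for fp in fps) for salt in SALTS]
```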
However...
- (Diagram of Document 1 and Document 2 on the permuted number line.)
- The two minima are equal iff the shingle with the MIN value in the union of Doc1 and Doc2 is common to both (i.e., lies in the intersection)
- Claim: this happens with probability Size_of_intersection / Size_of_union

Set similarity
- Similarity of sets C_i, C_j: Jaccard(C_i, C_j) = |C_i ∩ C_j| / |C_i ∪ C_j|
- View the sets as columns of a matrix A, with one row for each element in the universe; a_ij = 1 indicates the presence of item i in set j
- Example:

      C_1  C_2
       0    1
       1    0
       1    1      Jaccard(C_1, C_2) = 2/5 = 0.4
       0    0
       1    1
       0    1

Key observation
- For columns C_i, C_j, there are four types of rows:

      type   C_i  C_j
       A      1    1
       B      1    0
       C      0    1
       D      0    0

- Overload notation: A = # of rows of type A (and similarly for B, C, D)
- Claim: Jaccard(C_i, C_j) = A / (A + B + C)

Min hashing
- Randomly permute the rows
- Hash h(C_i) = index of the first row with a 1 in column C_i
- Surprising property: P[h(C_i) = h(C_j)] = Jaccard(C_i, C_j)
- Why? Both are A / (A + B + C): look down columns C_i, C_j until the first non-type-D row; h(C_i) = h(C_j) exactly when that row is of type A
- (A small numerical check of this property appears below.)

Final notes
- Shingling is a randomized algorithm: our analysis did not presume any probability model on the inputs, and it will give us the right (or wrong) answer with some probability on any input
- We've described how to detect near-duplication in a pair of documents; in real life we'll have to concurrently look at many pairs
- Use Locality Sensitive Hashing for this
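As a quick numerical check of the "surprising property" above, this snippet reuses the shingles, jaccard, and sketch helpers sketched earlier (all of them assumptions of these notes, not code from the lecture) and compares the exact Jaccard of two near-duplicate texts with the fraction of agreeing sketch positions.

```python
def estimated_jaccard(sk1: list, sk2: list) -> float:
    """Fraction of sketch positions on which two documents agree; this estimates Jaccard."""
    return sum(a == b for a, b in zip(sk1, sk2)) / len(sk1)

d1 = "the quick brown fox jumps over the lazy dog near the old river bank today"
d2 = "the quick brown fox jumps over the lazy dog near the old river bank tonight"

s1, s2 = shingles(d1), shingles(d2)
print("exact Jaccard:  ", round(jaccard(s1, s2), 3))
print("sketch estimate:", round(estimated_jaccard(sketch(s1), sketch(s2)), 3))
# With 200 positions the estimate is close to the exact value, since each position
# agrees with probability equal to the Jaccard similarity.
```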