Web Search. Web Spidering. Introduction



Outline. Information Retrieval applied to the Web. The Web is the largest collection of documents available today. Still, it is a collection, so we should be able to apply traditional IR techniques, with few changes. Topics: Web Search, Spidering.

Web Search Using IR (diagram): a Web spider collects pages into the document corpus; the user's query string goes to the IR system, which returns ranked documents (1. Page1, 2. Page2, 3. Page3, ...). The spider represents the main difference compared to traditional IR.

The World Wide Web. Developed by Tim Berners-Lee in 1990 at CERN to organize research documents available on the Internet. Combined the idea of documents available by FTP with the idea of hypertext to link documents. Berners-Lee developed the initial HTTP network protocol, URLs, HTML, and the first web server.

Web Pre-History. Ted Nelson developed the idea of hypertext in 1965. Doug Engelbart invented the mouse and built the first implementation of hypertext in the late 1960s at SRI. ARPANET was developed in the early 1970s. For what main reason? The basic technology was in place in the 1970s, but it took the PC revolution and widespread networking to inspire the web and make it practical.

Web Browser History. Early browsers were developed in 1992 (Erwise, ViolaWWW). In 1993, Marc Andreessen and Eric Bina at NCSA (UIUC) developed the Mosaic browser and distributed it widely. Andreessen joined with James Clark (Stanford professor and Silicon Graphics founder) to form Mosaic Communications Inc. in 1994 (which became Netscape to avoid conflict with UIUC). Microsoft licensed the original Mosaic from UIUC and used it to build Internet Explorer in 1995.

Search Engine Early History. By the late 1980s, many files were available by anonymous FTP. In 1990, Alan Emtage of McGill University developed Archie (short for "archives"), which assembled lists of files available on many FTP servers and allowed regex search of these file names. In 1993, Veronica and Jughead were developed to search the names of text files available through Gopher servers.

Web Search History. In 1993, early web robots (spiders) were built to collect URLs: Wanderer, ALIWEB (Archie-Like Index of the WEB), and WWW Worm (indexed URLs and titles for regex search). In 1994, Stanford grad students David Filo and Jerry Yang started manually collecting popular web sites into a topical hierarchy called Yahoo.

Web Search History (cont). In early 1994, Brian Pinkerton developed WebCrawler as a class project at the University of Washington (it eventually became part of Excite and AOL). A few months later, Michael "Fuzzy" Mauldin, a grad student at CMU, developed Lycos: the first to use a standard IR system as developed for the DARPA Tipster project, and the first to index a large set of pages. In late 1995, DEC developed AltaVista. It used many Alpha machines to quickly process large numbers of queries, and supported boolean operators and phrases.

Web Search Recent History. In 1998, Larry Page and Sergey Brin, Ph.D. students at Stanford, started Google. Its main advance is the use of link analysis to rank results partially based on authority. Conclusions: most popular search engines were developed by graduate students. Don't wait too long to develop your own search engine!

Web Challenges for IR. Distributed data: documents spread over millions of different web servers. Volatile data: many documents change or disappear rapidly (e.g. dead links). Large volume: billions of separate documents. Unstructured and redundant data: no uniform structure, HTML errors, up to 30% (near-)duplicate documents. Quality of data: no editorial control, false information, poor-quality writing, typos, etc. Heterogeneous data: multiple media types (images, video, VRML), languages, character sets, etc.

Number of Web Servers (chart).

Number of Web Pages (chart).

Number of Web Pages Indexed (chart; SearchEngineWatch, Aug. 15, 2001). Assuming about 20 KB per page, 1 billion pages is about 20 terabytes of data.

Growth of Web Pages Indexed (chart; SearchEngineWatch, Aug. 15, 2001). Google lists the current number of pages searched.

Graph Structure in the Web (figure): http://www9.org/w9cdrom/160/160.html

Zipf's Law on the Web. The length of web pages has a Zipfian distribution. The number of hits to a web page has a Zipfian distribution.

Manual Hierarchical Web Taxonomies. The Yahoo approach uses human editors to assemble a large, hierarchically structured directory of web pages (http://www.yahoo.com/). The Open Directory Project is a similar approach based on the distributed labor of volunteer editors ("net-citizens provide the collective brain"); it was started by Netscape and is used by most other search engines (http://www.dmoz.org/).

Web Search Using IR (diagram repeated from earlier): the spider represents the main difference compared to traditional IR.

Spiders (Robots/Bots/Crawlers). Start with a comprehensive set of root URLs from which to begin the search. Follow all links on these pages recursively to find additional pages. Index/process all newly found pages in an inverted index as they are encountered. May allow users to directly submit pages to be indexed (and crawled from).

Search Strategies: Breadth-first Search (diagram).

Search Strategies (cont): Depth-first Search (diagram).

Search Strategy Trade-Offs. Breadth-first explores uniformly outward from the root page but requires memory of all nodes on the previous level (exponential in depth); it is the standard spidering method. Depth-first requires memory of only depth times branching factor (linear in depth) but gets lost pursuing a single thread. Both strategies can be easily implemented using a queue of links (URLs).

Avoiding Page Duplication. Must detect when revisiting a page that has already been spidered (the web is a graph, not a tree). Must efficiently index visited pages to allow a rapid recognition test: tree indexing (e.g. a trie) or a hashtable. Indexing a page using its URL as key requires canonicalizing URLs (e.g. deleting an ending "/") and does not detect duplicated or mirrored pages. Indexing a page using its textual content as key requires first downloading the page. A sketch of both recognition tests follows.
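A minimal sketch (not from the original notes) of the two recognition tests, using a hashtable (a Python set) keyed on the canonical URL and, alternatively, on a hash of the page text:

```python
import hashlib

seen_urls = set()      # canonical URLs already fetched
seen_digests = set()   # hashes of page text already indexed

def already_visited(canonical_url: str) -> bool:
    """Recognition test keyed on the (canonicalized) URL."""
    if canonical_url in seen_urls:
        return True
    seen_urls.add(canonical_url)
    return False

def already_indexed(page_text: str) -> bool:
    """Recognition test keyed on textual content; catches mirrored pages,
    but requires downloading the page first."""
    digest = hashlib.sha1(page_text.encode("utf-8")).hexdigest()
    if digest in seen_digests:
        return True
    seen_digests.add(digest)
    return False
```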

Spidering Algorithm. Initialize queue Q with the initial set of known URLs. Until Q is empty or a page or time limit is exhausted: pop URL L from the front of Q; if L is not an HTML page (.gif, .jpeg, .ps, .pdf, .ppt, ...), continue the loop; if L has already been visited, continue the loop; download page P for L; if P cannot be downloaded (e.g. 404 error, robot excluded), continue the loop; index P (e.g. add to the inverted index or store a cached copy); parse P to obtain the list of new links N; append N to the end of Q.
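As an illustration only (the slides give no code), the loop above could be sketched in Python with the standard library; the function and variable names are my own, and politeness details such as robots.txt checks and crawl delays are omitted:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag
from urllib.request import urlopen

NON_HTML = (".gif", ".jpeg", ".jpg", ".png", ".ps", ".pdf", ".ppt")

class LinkParser(HTMLParser):
    """Collect the href targets of <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def crawl(seed_urls, page_limit=100):
    queue = deque(seed_urls)                       # Q: FIFO -> breadth-first
    visited, indexed = set(), []
    while queue and len(indexed) < page_limit:     # until Q empty or limit reached
        url, _ = urldefrag(queue.popleft())        # pop L from front of Q, drop #fragment
        if url in visited or url.lower().endswith(NON_HTML):
            continue                               # already visited, or not an HTML page
        visited.add(url)
        try:
            page = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue                               # cannot download P (e.g. 404 error)
        indexed.append(url)                        # "index" P (placeholder for a real indexer)
        parser = LinkParser()
        parser.feed(page)
        for link in parser.links:
            queue.append(urljoin(url, link))       # resolve relative URLs, append N to end of Q
    return indexed
```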

Queueing Strategy. How new links are added to the queue determines the search strategy. FIFO (append to the end of Q) gives breadth-first search. LIFO (add to the front of Q) gives depth-first search. Heuristically ordering Q gives a focused crawler that directs its search towards interesting pages.
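A tiny illustrative sketch (the URLs are placeholders): the two strategies differ only in where new links enter the queue.

```python
from collections import deque

queue = deque(["http://example.com/seed"])
new_links = ["http://example.com/a", "http://example.com/b"]

# FIFO: append new links to the end, always pop from the front -> breadth-first
queue.extend(new_links)
next_url = queue.popleft()

# LIFO: add new links to the front instead -> depth-first
queue.extendleft(reversed(new_links))
next_url = queue.popleft()
```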

Restricting Spidering. You can restrict the spider to a particular site: remove links to other sites from Q. You can restrict the spider to a particular directory: remove links not in the specified directory. Obey page-owner restrictions (robot exclusion).

Link Extraction. Must find all links in a page and extract URLs: <a href="http://www.site.uottawa.ca/~diana/csi4107">, <frame src="site-index.html">. Must complete relative URLs using the current page's URL: <a href="A1.htm"> resolves to http://www.site.uottawa.ca/~diana/csi4107/A1.htm, and <a href="../csi4107/paper-presentations.html"> resolves to http://www.site.uottawa.ca/~diana/csi4107/paper-presentations.html.
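Relative-URL completion can be illustrated with the standard library's urljoin; the base URL below is assumed to be the directory of the current page in the slide's example:

```python
from urllib.parse import urljoin

base = "http://www.site.uottawa.ca/~diana/csi4107/"
print(urljoin(base, "A1.htm"))
# http://www.site.uottawa.ca/~diana/csi4107/A1.htm
print(urljoin(base, "../csi4107/paper-presentations.html"))
# http://www.site.uottawa.ca/~diana/csi4107/paper-presentations.html
```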

URL Syntax. A URL has the following syntax: <scheme>://<authority><path>?<query>#<fragment>. A query passes variable values from an HTML form and has the syntax: <variable>=<value>&<variable>=<value>. A fragment is also called a reference or a ref, and is a pointer within the document to a point specified by an anchor tag of the form <A NAME="<fragment>">.
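For illustration, these components can be pulled apart with urlparse; the URL below is a made-up example:

```python
from urllib.parse import urlparse, parse_qs

url = "http://www.example.org/search/results.html?q=spider&lang=en#top"
parts = urlparse(url)
print(parts.scheme)           # 'http'
print(parts.netloc)           # 'www.example.org' (the authority)
print(parts.path)             # '/search/results.html'
print(parts.query)            # 'q=spider&lang=en'
print(parse_qs(parts.query))  # {'q': ['spider'], 'lang': ['en']}
print(parts.fragment)         # 'top'
```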

Link Canonicalization. Equivalent variations of an ending directory are normalized by removing the ending slash: http://www.site.uottawa.ca/~diana/ and http://www.site.uottawa.ca/~diana. Internal page fragments (refs) are removed: http://www.site.uottawa.ca/~diana/csi1102/index.html#eval becomes http://www.site.uottawa.ca/~diana/csi1102/index.html.
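A minimal canonicalization helper covering only the two normalizations mentioned on this slide (real crawlers also lower-case the host, resolve ".." segments, etc.):

```python
from urllib.parse import urldefrag

def canonicalize(url: str) -> str:
    """Normalize a URL before using it as a key in the visited-page index."""
    url, _fragment = urldefrag(url)   # drop internal page fragments (#ref)
    return url.rstrip("/")            # remove an ending slash

assert canonicalize("http://www.site.uottawa.ca/~diana/") == \
       canonicalize("http://www.site.uottawa.ca/~diana")
assert canonicalize("http://www.site.uottawa.ca/~diana/csi1102/index.html#eval") == \
       "http://www.site.uottawa.ca/~diana/csi1102/index.html"
```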

Anchor Text Indexing. Extract the anchor text (between <a> and </a>) of each link followed. Anchor text is usually descriptive of the document to which it points. Add anchor text to the content of the destination page to provide additional relevant keyword indices. Examples: <a href="http://www.microsoft.com">Evil Empire</a>, <a href="http://www.ibm.com">IBM</a>.

Anchor Text Indexing (cont'd). Helps when descriptive text in the destination page is embedded in image logos rather than in accessible text. Many times anchor text is not useful: "click here". Increases content more for popular pages with many incoming links, increasing recall of these pages. May even give higher weights to tokens from anchor text.
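One way (my sketch, not the notes' implementation) to collect (target URL, anchor text) pairs so the text can be credited to the destination page's index entry:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class AnchorTextParser(HTMLParser):
    """Collect (absolute target URL, anchor text) pairs from a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.pairs = []
        self._href = None
        self._text = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []
    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)
    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            self.pairs.append((urljoin(self.base_url, self._href),
                               " ".join(self._text).strip()))
            self._href = None

parser = AnchorTextParser("http://example.org/")
parser.feed('<a href="http://www.ibm.com">IBM</a>')
print(parser.pairs)   # [('http://www.ibm.com', 'IBM')]
```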

Robot Exclusion. Web sites and pages can specify that robots should not crawl/index certain areas. Two components: the Robots Exclusion Protocol, a site-wide specification of excluded directories, and the Robots META Tag, an individual document tag to exclude indexing or following links.

Robots Exclusion Protocol. The site administrator puts a robots.txt file at the root of the host's web directory, e.g. http://www.ebay.com/robots.txt or http://www.cnn.com/robots.txt. The file is a list of excluded directories for a given robot (user-agent). Exclude all robots from the entire site:
User-agent: *
Disallow: /

Robot Exclusion Protocol Examples.
Exclude specific directories:
User-agent: *
Disallow: /tmp/
Disallow: /cgi-bin/
Disallow: /users/paranoid/
Exclude a specific robot:
User-agent: GoogleBot
Disallow: /
Allow a specific robot:
User-agent: GoogleBot
Disallow:

Robot Exclusion Protocol Details. Use blank lines only to separate the records for different user-agents. One directory per Disallow line. No regex patterns in directories.
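In practice a crawler can delegate this to the standard library's robots.txt parser; the user-agent string below is a hypothetical name for our spider:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("http://www.cnn.com/robots.txt")   # one of the examples above
rp.read()                                      # fetch and parse robots.txt

# Before downloading a page, check whether our user-agent may fetch it.
if rp.can_fetch("MyCourseSpider", "http://www.cnn.com/some/page.html"):
    print("allowed to crawl")
else:
    print("excluded by robots.txt")
```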

Robots META Tag. Include a META tag in the HEAD section of a specific HTML document: <meta name="robots" content="none">. The content value is a pair of values for two aspects: index | noindex (allow/disallow indexing of this page) and follow | nofollow (allow/disallow following links on this page).

Robots META Tag (cont). Special values: all = index,follow; none = noindex,nofollow. Examples: <meta name="robots" content="noindex,follow">, <meta name="robots" content="index,nofollow">, <meta name="robots" content="none">.
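A sketch of how a spider might honour the META tag (parsing rules simplified; index,follow is assumed when no tag is present):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Read <meta name="robots" ...> and decide whether the page may be
    indexed and/or its links followed."""
    def __init__(self):
        super().__init__()
        self.index = True
        self.follow = True
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            directives = {d.strip() for d in a.get("content", "").lower().split(",")}
            if "none" in directives or "noindex" in directives:
                self.index = False
            if "none" in directives or "nofollow" in directives:
                self.follow = False

p = RobotsMetaParser()
p.feed('<head><meta name="robots" content="noindex,follow"></head>')
print(p.index, p.follow)   # False True
```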

Robot Exclusion Issues. The META tag is newer and less well-adopted than robots.txt. The standards are conventions to be followed by good robots. Companies have been prosecuted for disobeying these conventions and trespassing on private "cyberspace".

Multi-Threaded Spidering. The bottleneck is network delay in downloading individual pages. Best to have multiple threads running in parallel, each requesting a page from a different host. Distribute URLs to threads to guarantee an equitable distribution of requests across different hosts, to maximize throughput and avoid overloading any single server. The early Google spider had multiple coordinated crawlers with about 300 threads each, together able to download over 100 pages per second.
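This is not Google's architecture, just a small sketch of overlapping downloads with a crude one-request-per-host rule per batch, using a thread pool:

```python
import concurrent.futures
from urllib.parse import urlparse
from urllib.request import urlopen

def fetch(url):
    """Download one page; network delay dominates, so threads overlap well."""
    try:
        return url, urlopen(url, timeout=5).read()
    except Exception:
        return url, None

def fetch_batch(urls, max_threads=10):
    # Keep at most one outstanding request per host in this batch,
    # a crude way to avoid overloading any single server.
    batch, seen_hosts = [], set()
    for url in urls:
        host = urlparse(url).netloc
        if host not in seen_hosts:
            seen_hosts.add(host)
            batch.append(url)
    with concurrent.futures.ThreadPoolExecutor(max_threads) as pool:
        return list(pool.map(fetch, batch))
```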

Directed/Focused Spidering. Sort the queue to explore more interesting pages first. Two styles of focus: topic-directed and link-directed.

Topic-Directed Spidering. Assume a desired topic description or sample pages of interest are given. Sort the queue of links by the similarity (e.g. cosine metric) of their source pages and/or anchor text to this topic description. Related to Topic Tracking and Detection.
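A toy sketch of a topic-directed frontier: links are prioritized by the cosine similarity of their source text to a topic description (the topic words and URLs below are made up):

```python
import heapq
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

topic = Counter("web spider crawler search engine".split())

frontier = []   # max-heap via negated scores: most topic-like link comes out first
def enqueue(url, source_text):
    score = cosine(Counter(source_text.lower().split()), topic)
    heapq.heappush(frontier, (-score, url))

enqueue("http://example.org/crawling", "notes on web crawler design")
enqueue("http://example.org/cooking", "a recipe for tomato soup")
print(heapq.heappop(frontier)[1])   # the crawling page is explored first
```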

Link-Directed Spidering. Monitor links and keep track of the in-degree and out-degree of each page encountered. Sort the queue to prefer popular pages with many incoming links (authorities), or to prefer summary pages with many outgoing links (hubs). Cf. Google's PageRank algorithm.

Keeping Spidered Pages Up to Date. The Web is very dynamic: many new pages, updated pages, deleted pages, etc. Periodically check spidered pages for updates and deletions: just look at header info (e.g. META tags on last update) to determine whether a page has changed, and only reload the entire page if needed. Track how often each page is updated and preferentially return to pages which are historically more dynamic. Preferentially update pages that are accessed more often, to optimize the freshness of popular pages.
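Header-only checking can be done with an HTTP conditional request; the sketch below assumes the server supports Last-Modified / If-Modified-Since:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def changed_since(url, last_seen_http_date):
    """Ask the server whether the page changed, without re-downloading it
    when it has not (HTTP conditional GET)."""
    req = Request(url, headers={"If-Modified-Since": last_seen_http_date})
    try:
        urlopen(req, timeout=5)
        return True                 # 200 OK: page modified (or server ignored the header)
    except HTTPError as e:
        if e.code == 304:           # 304 Not Modified: no need to reload the page
            return False
        raise

# Example call (hypothetical date string in HTTP format):
# changed_since("http://example.org/", "Sat, 01 Jan 2022 00:00:00 GMT")
```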