IOSR Journal of Computer Engineering (IOSR-JCE)
e-ISSN: 2278-0661, p-ISSN: 2278-8727, Volume 15, Issue 6 (Nov. - Dec. 2013), PP 59-63

A Novel Interface to a Web Crawler using VB.NET Technology

Deepak Kumar 1, Dr. Sushil Kumar 2, Narender Kumar 3
1 (Computer Science & Engineering, WCTM, Gurgaon, Haryana, India)
2 (Computer Science & Engineering, SGTIET, Gurgaon, Haryana, India)
3 (Computer Science & Engineering, WCTM, Gurgaon, Haryana, India)

Abstract: The number of web pages around the world is growing into the millions and trillions. To make searching easier for users, web search engines came into existence. Web search engines are used to find specific information on the World Wide Web; without them, it would be almost impossible to locate anything on the Web unless a specific URL address is known. The information they index is supplied by a web crawler, a computer program that is an essential component of search engines, data mining and other Internet applications. Scheduling the Web pages to be downloaded is an important aspect of crawling. Previous research on Web crawling focused on optimizing either crawl speed or the quality of the downloaded pages. While both metrics are important, scheduling on one of them alone is insufficient and can bias or hurt the overall crawl process. This paper presents the design of a new Web Crawler using VB.NET technology.

Keywords: Web Crawler, Visual Basic Technology, Crawler Interface, Uniform Resource Locator.

I. Introduction
A web crawler is a program, software or automated script that browses the World Wide Web in a methodical, automated manner. The structure of the World Wide Web is a graph: the links given in a page can be used to open other web pages. The Internet can be viewed as a directed graph, with web pages as nodes and hyperlinks as edges, so the search operation can be abstracted as a process of traversing a directed graph. By following the link structure of the Web, a crawler can reach a large number of new web pages starting from a single starting page. Web crawlers are the programs that exploit this graph structure of the Web to move from page to page; such programs are also called wanderers, robots, spiders and worms. They are designed to retrieve Web pages and add them, or representations of them, to a local repository or database. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to support fast searches. Web search engines work by storing information about many web pages, which they retrieve from the WWW itself; these pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser that follows every link it sees. It may be observed that the term "crawler" does not by itself say anything about the speed of these programs, which can be considerably fast. In short, web crawlers are software systems that use the text and links on web pages to create search indexes of those pages, following the HTML links to crawl the connections between pages.
II. Evaluation Of Crawlers
In a general sense, a crawler may be evaluated on its ability to retrieve "good" pages. A major hurdle, however, is recognizing these good pages. In an operational environment, real users could judge the relevance of pages as they are crawled, allowing us to determine whether the crawl was successful. Unfortunately, meaningful experiments involving real users for assessing Web crawls are extremely problematic. For instance, the very scale of the Web suggests that in order to obtain a reasonable notion of crawl effectiveness one must conduct a large number of crawls, i.e., involve a large number of users. Secondly, crawls against the live Web pose serious time constraints, so crawls other than short-lived ones will seem overly burdensome to the user. We may avoid this burden by showing the user only the results of the full crawl, but this again limits the extent of the crawl. Alternatively, we may use indirect methods, such as inferring crawler strengths by assessing the applications they support; however, this assumes that the underlying crawlers are openly specified, and it prevents the assessment of crawlers that are new. Thus we argue that, although user-based evaluation remains the ideal, at this juncture it is appropriate and important to seek user-independent mechanisms to assess crawl performance. Moreover, in the not so distant future, the majority of direct consumers of information are more likely to be Web agents working on behalf of humans and other Web agents than humans themselves, so it is quite reasonable to explore crawlers in a context where the parameters of crawl time and crawl distance may be beyond the limits of human acceptance imposed by user-based experimentation.
Since we are not involving real users, we use topics instead of queries, each represented by a collection of seed URLs. It is clear that we are simplifying matters by moving from queries to topics; for example, we lose any clues to user context and goals that queries may provide. However, this approach of starting with seed URLs is increasingly common in crawler research. We assume that if a page is on topic then it is a good page. There are obvious limitations to this assumption: topicality, although necessary, may not be a sufficient condition for user relevance. For example, a user who has already viewed a topical page may not consider it relevant because it lacks novelty. While we do not underrate these criteria, for the reasons stated above we choose to focus only on topicality as an indicator of relevance for the extent of this research.

III. Basic Crawling Terminology
Before we discuss the algorithms for web crawling, it is worth explaining some of the basic terminology related to crawlers.
Seed Page: By crawling, we mean traversing the Web by recursively following links from a starting URL or a set of starting URLs. This starting URL set is the entry point through which any crawler begins its searching procedure, and it is known as the Seed Page. The selection of a good seed is one of the most important factors in any crawling process.
Frontier: The crawling method starts with a given URL (the seed), extracting links from it and adding them to a list of unvisited URLs. This list of unvisited links or URLs is known as the Frontier. Each time, a URL is picked from the frontier by the Crawler Scheduler. The frontier is typically implemented using a Queue or Priority Queue data structure, and its maintenance is a major responsibility of any crawler.
Parser: Once a page has been fetched, we need to parse its content to extract information that will feed, and possibly guide, the future path of the crawler. Parsing may mean simple hyperlink/URL extraction, or it may involve the more complex process of tidying up the HTML content in order to analyze the HTML tag tree. The job of the parser is to process the fetched web page, extract the list of new URLs from it and return the new unvisited URLs to the Frontier.
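To make the role of the Parser concrete, the following VB.NET sketch extracts the hyperlinks of a fetched page with a regular expression and resolves them to absolute URLs. This is only a minimal illustration, not the parser used in the paper: the module and function names are invented here, and a production parser would normally tidy the HTML and walk the tag tree rather than rely on a regular expression.

Imports System
Imports System.Collections.Generic
Imports System.Text.RegularExpressions

Module ParserSketch
    ' Return the absolute URLs found in the href attributes of a fetched page.
    Function ExtractLinks(html As String, pageUrl As String) As List(Of String)
        Dim links As New List(Of String)()
        Dim baseUri As New Uri(pageUrl)
        For Each m As Match In Regex.Matches(html, "href\s*=\s*[""']([^""'#]+)[""']", RegexOptions.IgnoreCase)
            Dim target As Uri = Nothing
            ' Relative links are resolved against the page that contained them.
            If Uri.TryCreate(baseUri, m.Groups(1).Value, target) Then
                links.Add(target.AbsoluteUri)
            End If
        Next
        Return links
    End Function
End Module

Returning only absolute URLs keeps the Frontier uniform, so the scheduler never has to know which page a given link came from.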
4.1 SIMULATOR DESIGN
This section covers the high-level design and the detailed design of the Web Crawler. The next section presents the high-level design of the Web Crawler, in which a summarised algorithmic view of the proposed crawler is given. The section after that describes the general architecture of the Web Crawler Simulator, the technology and programming language used, the user interface of the simulator, and the performance metric concepts.

4.2 HIGH LEVEL DESIGN
The proposed crawler simulator imitates the behaviour of various crawl scheduling algorithms. This section briefly describes the overall working of the simulator in algorithmic notation. The algorithm below presents the high-level design of the Web Crawler Simulator; a VB.NET sketch of this loop is given after the listing.
Step 1. Accept a URL and use it as the Seed, or acquire the URL of a processed web document from the processing queue.
Step 2. Add it to the Frontier.
Step 3. Pick a URL from the Frontier for crawling.
Step 4. Fetch the web page corresponding to that URL and store the web document.
Step 5. Parse the web document's content and extract the set of URL links.
Step 6. Add all the newly found URLs to the Frontier.
Step 7. Go to Step 3 and repeat while the Frontier is not empty.
Step 8. Output the desired statistics.
Step 9. Exit.
Thus a crawler will recursively keep adding newer URLs to the database repository of the search engine. The main function of a crawler is therefore to add new links to the frontier and to select a new URL from the frontier for further processing after each recursive step.
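The loop below is a minimal VB.NET sketch of Steps 1 to 9, under assumptions that go beyond the paper: the Frontier is a plain FIFO queue, pages are fetched with System.Net.WebClient, links are pulled out with a simple regular expression, and a hypothetical maxPages limit stands in for the simulator's stopping condition.

Imports System
Imports System.Collections.Generic
Imports System.Net
Imports System.Text.RegularExpressions

Module CrawlerLoopSketch
    Sub Crawl(seedUrl As String, maxPages As Integer)
        Dim frontier As New Queue(Of String)()            ' Step 2: the Frontier
        Dim visited As New HashSet(Of String)()
        frontier.Enqueue(seedUrl)                         ' Step 1: the seed URL

        Using client As New WebClient()
            While frontier.Count > 0 AndAlso visited.Count < maxPages
                Dim url As String = frontier.Dequeue()    ' Step 3: pick a URL from the Frontier
                If Not visited.Add(url) Then Continue While
                Dim html As String = Nothing
                Try
                    html = client.DownloadString(url)     ' Step 4: fetch the web document
                Catch ex As WebException
                    Continue While                        ' unreachable page: skip it
                End Try
                ' Step 5: parse the document and extract its links (regex only for illustration).
                For Each m As Match In Regex.Matches(html, "href\s*=\s*[""']([^""'#]+)[""']", RegexOptions.IgnoreCase)
                    Dim child As Uri = Nothing
                    If Uri.TryCreate(New Uri(url), m.Groups(1).Value, child) Then
                        frontier.Enqueue(child.AbsoluteUri)   ' Step 6: add new URLs to the Frontier
                    End If
                Next
            End While                                     ' Step 7: repeat while the Frontier is not empty
        End Using

        Console.WriteLine("Pages visited: " & visited.Count.ToString())  ' Step 8: output statistics
    End Sub                                               ' Step 9: exit

    Sub Main()
        ' Example: crawl the university site used later in the paper; the 50-page limit is an assumption.
        Crawl("http://www.cdlu.edu.in", 50)
    End Sub
End Module

Replacing the FIFO Dequeue with a relevancy-ordered pick is all that is needed to turn this baseline into the Best-First variant discussed later.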

V. General Architecture Of The Simulator
Figure 5.1 shows the flow of the crawler simulation architecture [Ardö A]. The simulator is designed so that all the logic of any specific scheduling algorithm is encapsulated in a separate module that can easily be plugged into the system through the configuration file. Besides the frontier, the simulator contains a queue, which is filled by the scheduling algorithm with the first k URLs of the frontier, where k is the size of the queue, once the scheduling algorithm has been applied to the frontier. Each crawling loop involves picking the next URL from the queue, fetching the page corresponding to that URL from the local database that simulates the Web, and determining whether the page is relevant or not. If the page is not in the database, the simulation tool can fetch it from the real Web and store it in the local repository. If the page is relevant, its outgoing links are extracted and added to the frontier, as long as they are not already in it. The crawling process stops once a certain end condition is fulfilled, usually when a certain number of pages have been crawled or when the simulator is ready to crawl another page and the frontier is empty. If the queue is empty, the scheduling algorithm is applied again and fills the queue with the first k URLs of the frontier, provided the frontier contains at least k URLs; otherwise the queue is filled with all the URLs of the frontier.
Fig. 5.1: General architecture of the simulator (Scheduler, Queue, Multithreaded Downloader, Web pages, and a Database holding text and metadata).
All crawling modules share the data structures needed for interaction with the simulator. The simulation tool maintains a list of unvisited URLs called the frontier, which is initialized with the seed URLs.

VI. Performance Metrics
During the implementation process we made some assumptions simply to simplify the implementation of the algorithms and the results. The basic procedure executed by any web crawling algorithm takes a list of seed URLs as its input and repeatedly executes the following steps:
1. Remove a URL from the URL list
2. Determine the IP address of its host name
3. Download the corresponding document
4. Check the relevance of the document
5. Extract any links contained in it
6. Add these links back to the URL list
In this implementation of the algorithms, we made some assumptions related to steps (4) and (5) of the above procedure.

6.1 RELEVANCY CALCULATION
In the algorithms, relevancy is calculated by parsing the web page and matching its contents against the desired keywords. After matching, a relevancy value for that page is generated by the Relevancy Calculator component; a sketch of such a calculation is given below.
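The following VB.NET sketch is a minimal Relevancy Calculator. The paper does not state the exact formula, so the scoring here is an assumption: the relevancy value is simply the average number of occurrences of the desired keywords in the visible text of the page, and the module and function names are invented for illustration.

Imports System
Imports System.Collections.Generic
Imports System.Text.RegularExpressions

Module RelevancySketch
    ' Match the page contents against the desired keywords and return a relevancy value.
    Function RelevancyValue(pageHtml As String, keywords As List(Of String)) As Double
        ' Strip the HTML tags so that only the visible text is matched.
        Dim text As String = Regex.Replace(pageHtml, "<[^>]+>", " ").ToLowerInvariant()
        If keywords.Count = 0 Then Return 0
        Dim total As Integer = 0
        For Each kw As String In keywords
            total += Regex.Matches(text, Regex.Escape(kw.ToLowerInvariant())).Count
        Next
        Return total / keywords.Count
    End Function
End Module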

6.2 CRAWLER USER INTERFACE
The foremost criterion for the evaluation of a crawling algorithm is the number of relevant pages visited by that algorithm under the same set of seed URLs. The simulator has therefore been designed to study the behaviour patterns of different crawling algorithms for the same set of starting URLs. Page rank, the number of relevant pages visited and the order in which a set of pages is downloaded are considered when evaluating the performance of a crawling policy and crawler. The figure below is a snapshot of the main user interface of the Web Crawler simulator, which is designed in VB.NET using the ASP.NET Window Application project type. To crawl a website or any web application with this crawler, an internet connection is required; the input URL must be given in a form such as http://www.google.com or http://google.com, and the location and name of the MS Access database in which the crawling results are saved must be set.
Fig. 6.2: Main user interface of the Web Crawler simulator.
At each simulation step, the scheduler chooses the topmost website from the queue of websites and sends this site's information to a module that simulates downloading pages from the website. For this, the simulator uses the different crawl scheduling policies and saves the collected or downloaded data in an MS Access database, in a table with the fields ID, URL and Data.

CRAWLING RESULT
The best way to compare the results of the different policies is to present them in a table, with the results laid out in rows and columns. The output of the first simulated algorithm, the Breadth-First algorithm, is shown in the snapshot below.
Fig. 7.1: Crawling result of the Breadth-First algorithm.
The simulator uses the Breadth-First algorithm and crawled the website at the URL http://www.cdlu.edu.in. The working of the Breadth-First algorithm is very simple: it operates on a first-come, first-served basis. Crawling starts with the URL http://www.cdlu.edu.in; after this URL is processed, its child links are inserted into the Frontier. The next page is then fetched from the Frontier and processed, its children are inserted into the Frontier, and so on. This procedure continues until the Frontier becomes empty. Breadth-First is used here as a baseline crawler: since it does not use any knowledge about the topic, its performance is taken to provide a lower bound for the more sophisticated algorithms.
The second algorithm is Best-First, in which the preference for the next page to be visited depends on the relevancy of that page. Best-First traverses from the same seed URL as Breadth-First, http://www.cdlu.edu.in, whose relevancy is set to the highest value (2 in this case). The relevancies of the seed's children come out to be 0.1 and 1.0 respectively, and the children are inserted into the Frontier along with their respective relevancies. Every time, the page with the highest relevancy value is picked from the Frontier: the parent relevance decides which page will be selected next, and the page with the highest parent relevance value is selected from the Frontier each time. A sketch of this selection step is given below.
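The Best-First selection step can be sketched in VB.NET as follows. The representation is an assumption made for illustration: the Frontier is modelled as a list of (URL, relevancy) pairs, the entry with the highest relevancy value inherited from its parent page is removed first, and the type and function names are invented.

Imports System
Imports System.Collections.Generic

Module BestFirstSketch
    ' A Frontier entry: a URL together with the relevancy value inherited from its parent page.
    Public Structure FrontierEntry
        Public Url As String
        Public Relevancy As Double
    End Structure

    ' Remove and return the entry with the highest relevancy value (the frontier must not be empty).
    Function PickBest(frontier As List(Of FrontierEntry)) As FrontierEntry
        Dim bestIndex As Integer = 0
        For i As Integer = 1 To frontier.Count - 1
            If frontier(i).Relevancy > frontier(bestIndex).Relevancy Then bestIndex = i
        Next
        Dim best As FrontierEntry = frontier(bestIndex)
        frontier.RemoveAt(bestIndex)
        Return best
    End Function
End Module

With this pick in place of the FIFO Dequeue, the rest of the crawling loop is unchanged, which is why Breadth-First serves as a natural baseline for the comparison.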

The third crawling algorithm is Breadth-First with time constraints; the result of crawling with this algorithm was produced for the same seed URL. Analysis of the produced results shows that each policy behaves differently with the same seed URL.

RESULT FORMAT
The data downloaded page by page while crawling the website http://www.cdlu.edu.in is stored in the Data column of the database table using the format shown below:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Language" content="en-us" />
<meta http-equiv="X-UA-Compatible" content="IE=7" />
<meta http-equiv="Content-Type" content="text/html; charset=windows-1252" />
<meta name="verify-v1" content="unh2/lfavip3xi8n/livd63/1yquypwey zegouv80ls=" />
<title>Welcome to CDLU, Sirsa</title>
<meta name="keywords" content="Chaudhary Devi Lal University, CDLU, University in Sirsa, Distance education, Sirsa university, university in Haryana, Devi Lal university, CDLU Sirsa, Sirsa university, Tau Haryana, Choudhary Devi Lal, university of Haryana, MCA in Sirsa, Mass communication in Sirsa, M.A. in Sirsa, M.Com in Sirsa" />
</body>
</html>
This format shows the crawled data for the home page of the website.

VII. Conclusions
The Web Crawler is one of the important components of any search engine, and a number of web crawlers are available in the market. This paper has presented the design and user interface of a Web Crawler built as an ASP.NET Window application in the VB language. The job of a web crawler is to navigate the web and extract new pages for storage in the database of the search engine; this involves traversing, parsing and other considerable issues.

REFERENCES
[1] http://en.wikipedia.org/wiki/Web_crawler#Examples_of_web_crawlers
[2] http://www.chato.cl/papers/castillo04_scheduling_algorithms_web_crawling.pdf
[3] http://ieeexplore.ieee.org/iel5/2/34424/01642621.pdf
[4] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.1.9569&rep=rep1&type=pdf
[5] http://dollar.biz.uiowa.edu/~pant/papers/crawling.pdf
[6] Marc Najork and Allan Heydon, "High-Performance Web Crawling", SRC Research Report 173, Compaq Systems Research Center, September 26, 2001.
[7] Sergey Brin and Lawrence Page, "The anatomy of a large-scale hypertextual Web search engine", in Proceedings of the Seventh International World Wide Web Conference, pages 107-117, April 1998.
[8] [Ardö A] Ardö, A. (2005). Combine Web crawler, software package for general and focused Web crawling. http://combine.it.lth.se/
[9] Brin, S. and Page, L. 1998. The anatomy of a large-scale hypertextual Web search engine. In Proceedings of the Seventh International Conference on World Wide Web, 107-117.
[10] Heydon, A. and Najork, M. 1999. Mercator: A scalable, extensible Web crawler. World Wide Web 2, 4 (Apr. 1999), 219-229.
[11] Boldi, P., Codenotti, B., Santini, M., and Vigna, S. 2004. UbiCrawler: a scalable fully distributed web crawler. Softw. Pract. Exper. 34, 8 (Jul. 2004), 711-726.