
Available online www.jocpr.com

Journal of Chemical and Pharmaceutical Research, 2014, 6(5):2057-2063

Research Article    ISSN: 0975-7384    CODEN(USA): JCPRC5

Research of a professional search engine system based on Lucene and Heritrix

Ying Hong* and Chao Lv
Computer Information Center, Beijing Institute of Fashion Technology, Beijing, China

ABSTRACT
To help users find professional information quickly and accurately, a professional search engine is designed and implemented. First, web pages are collected by an extended Heritrix web crawler, and the data extracted from them with jsoup is saved locally. Second, the collected data is processed with Chinese word segmentation, inverted indexing, index retrieval and an improved web page ranking algorithm. Finally, the professional search engine is assembled from these parts. The experimental results show that this professional search engine substantially improves the accuracy and efficiency of web page retrieval.

Keywords: professional search engine; Lucene; Heritrix; information retrieval; PageRank; web page ranking; Chinese word segmentation; jsoup

INTRODUCTION
How to obtain the required information accurately from massive internet resources is an important research question in computer science. To some extent, the appearance of general search engines has solved the problem of retrieving information from the internet. Although general search engines are powerful, their retrieval performance is poor in professional fields. Because of the ambiguity of Chinese vocabulary, the results returned by general search engines often cover relevant network information from every field containing the keywords. For a user who wants to find key information in a particular professional field, the workload of filtering those results is certainly huge.
In this paper, we analyze the overall architecture of a professional search engine system based on Heritrix and Lucene, design the function of each module, and implement the complete system.

General search engine and professional search engine
A general search engine gathers information from the web according to a certain strategy and presents it to the user after building indexes, removing duplicates and sorting. Its work comprises two parts: information crawling and information retrieval. Information crawling is performed by a webpage capture program (the spider, also known as a web crawler) and an indexer. The spider traverses and crawls webpage information along the hyperlinks in each page according to a scanning strategy. The indexer processes the webpage information, including word segmentation, duplicate removal and sorting, and then builds an index database. Information retrieval is performed by the searcher and the user interface. When the user inputs query keywords, the searcher deletes meaningless characters and then finds the

relevant information in the index database and presents it to the user according to a sorting strategy. The user interface is the interactive interface between the search engine and its users; through it, users input keywords and view results. A professional search engine works on basically the same principle, with two differences: (1) when the spider crawls webpage information, it first filters the information with the help of a professional lexicon before building the index database; (2) the ranking algorithm must take the professional relevance of the search results into account [1].

Lucene
Lucene is a free, open-source information retrieval library written in Java and supported by the Apache Software Foundation. It is suitable for any application that requires full-text indexing and search, and is a popular choice for consumer and business web applications, single-site search, and enterprise search. It can index text from a range of content types including HTML, PDF, Microsoft Word, Excel and XML [2]. The source code of Lucene is divided into seven packages, each of which implements a specific function. Although Lucene offers rich functionality, its basic work consists of two parts: (1) applying a word segmentation algorithm to the webpage text and building the index; (2) matching the query keywords entered by the user against the index and ranking the results appropriately.

Heritrix
Heritrix is the Internet Archive's open-source, extensible, web-scale, archival-quality web crawler project [3]. Heritrix has good scalability, so developers can extend its components to implement their own crawl tactics. When Heritrix starts, it selects a URI from the URI queue and downloads the corresponding file from the remote server. Next, it analyzes the page content and extracts new URIs, which join the URI queue for downloading. The architecture of Heritrix is shown in Fig. 1.

Fig. 1. The architecture of Heritrix. (The diagram shows a Web Administrative Console driving the CrawlController, which is configured by a CrawlOrder; the Frontier manages the URI work queues, the already-included URIs and the server cache, and schedules each URI through the processing chains: Prefetch chain (Preselector, PreconditionEnforcer), Fetch chain (FetchDNS, FetchHTTP), Extractor chain (ExtractorHTML, ExtractorJS) and Write chain (ARCWriteProcessor), after which the URI is reported finished.)

Heritrix includes the following components:
(1) CrawlOrder. The starting point of a crawl job. It can be configured in various ways; the simplest is through the file order.xml.
(2) CrawlController. The core component of a crawl job. It controls the beginning and end of the job, gets URIs from the Frontier and passes them to the ToeThreads.
(3) Frontier. Provides URIs to the ToeThreads and uses a specific algorithm to decide which URI is put into the processing chain next.
(4) ToeThread and ToePool. A ToeThread is a crawling thread; the ToePool is the thread pool that manages all crawling threads. Multithreading lets Heritrix crawl web pages efficiently.
(5) Processor. Mainly downloads web pages and extracts URIs.

Jsoup
jsoup is an HTML parser developed in Java: a Java library for working with real-world HTML. It provides a very convenient API for extracting and manipulating data using the best of DOM, CSS and jQuery-like methods. jsoup implements the WHATWG HTML5 specification and parses HTML to the same DOM as modern browsers do [4]. jsoup can scrape and parse HTML from a URL, file or string, and can find and extract data using DOM traversal or CSS selectors.

The architecture of the professional search engine system
Considering the characteristics of professional-field search and the operational efficiency of the system, we build the professional search engine system on Lucene and Heritrix. Its component architecture is shown in Fig. 2.

Fig. 2. The architecture of the professional search engine system. (A webpage crawling module based on Heritrix fetches pages from the internet; an information extraction module based on jsoup writes the extracted data to local files; an information index module based on Lucene, with Chinese word segmentation, builds the index files; the searcher answers queries from the user interface against the index files.)

The webpage crawling module is implemented by extending Heritrix; its crawling strategy is the best-first algorithm, and the gathered data are stored in local files. The information extraction module is based on jsoup. The information index module is based on Lucene: after Chinese word segmentation, it creates an inverted index over the documents and stores the index files locally [5]. The searcher module searches the index files according to the keywords input by users and returns the sorted results.

Strategy of focused crawling
A focused web crawler filters out links that have nothing to do with the topic according to a specific web analysis algorithm. It keeps the links related to the topic and pushes them into the URL queue, then selects the next URL to crawl according to its search strategy. This process is repeated until crawling ends.
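The crawl loop just described — keep topical links, enqueue them, always expand the most promising URL next — can be sketched with a priority queue. This is an illustrative sketch, not Heritrix code; the class and method names are our own, and the relevance score passed to `schedule` stands in for the topic-correlation measure:

```java
import java.util.*;

// Best-first crawl frontier: URLs are dequeued in descending order of the
// relevance score assigned when they were discovered. Fetching and parsing
// are abstracted away; only the scheduling policy is shown.
public class BestFirstFrontier {
    private static class Entry {
        final String url;
        final double priority;
        Entry(String url, double priority) { this.url = url; this.priority = priority; }
    }

    // Highest priority first.
    private final PriorityQueue<Entry> queue =
        new PriorityQueue<>((a, b) -> Double.compare(b.priority, a.priority));
    private final Set<String> seen = new HashSet<>();

    // Enqueue a discovered URL; duplicates are ignored.
    public void schedule(String url, double priority) {
        if (seen.add(url)) queue.add(new Entry(url, priority));
    }

    // Return the most promising URL still waiting, or null when the queue is empty.
    public String next() {
        Entry e = queue.poll();
        return e == null ? null : e.url;
    }
}
```

In a full crawler, `next()` would feed the fetch/extract chain, and each extracted link would be re-scheduled with the correlation score of the page it came from.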
A focused web crawler must solve four problems: (1) How to describe the professional topic? (2) How to decide the order in which URLs are selected for crawling? (3) How to judge the correlation between a web page and the professional topic? (4) How to improve the coverage of the crawler? Researchers have proposed various focused-crawling strategies and related algorithms. In this paper, we describe the professional topic with a weighted term vector and adopt the best-first search algorithm for focused crawling.

Professional topic description
We describe the professional topic with characteristic words selected by consulting experts in the professional field. Each characteristic word is given a weight representing its professional distinctiveness and importance, and the weights sum to 1. This yields a reference document vector representing the professional topic: each component of the vector is the weight of a characteristic word, and the dimension of the vector is the number of characteristic words.
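Comparing such a reference vector with a page's term vector amounts to a normalized inner product — the correlation Sim(q, p) used in the next subsection to order the crawl queue. A minimal sketch, with purely illustrative vector contents:

```java
import java.util.*;

// Cosine-style correlation between a topic vector q and a page vector p,
// each represented as a map from word to weight/frequency.
public class TopicSimilarity {
    public static double sim(Map<String, Double> q, Map<String, Double> p) {
        double dot = 0, normQ = 0, normP = 0;
        for (Map.Entry<String, Double> e : q.entrySet()) {
            dot   += e.getValue() * p.getOrDefault(e.getKey(), 0.0);
            normQ += e.getValue() * e.getValue();
        }
        for (double v : p.values()) normP += v * v;
        if (normQ == 0 || normP == 0) return 0;
        return dot / (Math.sqrt(normQ) * Math.sqrt(normP));
    }
}
```

A page whose term vector points in the same direction as the topic vector scores 1; a page sharing no characteristic words scores 0.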

Best first search algorithm
The priority of a URL waiting to be crawled is calculated from the correlation degree between the crawled web page p containing it and the professional topic: the higher the correlation degree of p, the higher the priority of the URLs that p points to. This determines the crawling order of the URLs in the queue. The correlation degree between web page p and professional topic q is calculated as in equation (1):

    Sim(q, p) = ( Σ_k f_kq · f_kp ) / ( sqrt( Σ_k f_kq² ) · sqrt( Σ_k f_kp² ) )    (1)

Here q denotes the professional topic and p the crawled web page; f_kq is the occurrence frequency of word k in q, and f_kp is the occurrence frequency of word k in p.

Web information extraction
A web page usually includes many kinds of content, such as navigation information, the main content and copyright information. If we store and index the whole web page, the accuracy of the retrieval results decreases, so we must extract the useful content in order to judge the web page correctly. In this paper, we adopt the jsoup component to extract information and URLs from a web page. The file jsoup-1.7.3.jar can be downloaded from the project website and imported into Eclipse. jsoup is an open-source toolkit developed in Java; it can not only obtain the HTML code of a web page from its URL, but also provides many tools for extracting useful information from that HTML. We extract the title, content and URLs, and save them to a folder as txt files.

Chinese word segmentation
The Chinese word segmenter shipped with Lucene, CJKAnalyzer, struggles to achieve ideal results, so we redesigned the Chinese word segmentation module of the system. At present there are three mature families of word segmentation algorithms: string matching, language understanding and statistics. Each has its own advantages and disadvantages.
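Of these families, dictionary-based string matching is the one adopted below. A compact sketch of forward and backward maximum matching — the toy dictionary uses Latin letters standing in for Chinese characters, and is illustrative only:

```java
import java.util.*;

// Dictionary-based maximum matching: scan the string and at each position
// take the longest dictionary word that matches (falling back to one character).
public class MaxMatchSegmenter {
    // Forward maximum matching: scan from front to back.
    public static List<String> forward(String s, Set<String> dict, int maxLen) {
        List<String> out = new ArrayList<>();
        int i = 0;
        while (i < s.length()) {
            int len = Math.min(maxLen, s.length() - i);
            // Shrink the window until a dictionary word (or a single char) matches.
            while (len > 1 && !dict.contains(s.substring(i, i + len))) len--;
            out.add(s.substring(i, i + len));
            i += len;
        }
        return out;
    }

    // Backward maximum matching: scan from back to front.
    public static List<String> backward(String s, Set<String> dict, int maxLen) {
        LinkedList<String> out = new LinkedList<>();
        int i = s.length();
        while (i > 0) {
            int len = Math.min(maxLen, i);
            while (len > 1 && !dict.contains(s.substring(i - len, i))) len--;
            out.addFirst(s.substring(i - len, i));
            i -= len;
        }
        return out;
    }
}
```

When the two scans disagree (as for "abc" with dictionary {"ab", "bc"}), the disagreement flags an ambiguous span; comparing and merging the two results is exactly the combination strategy described next.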
In this paper, we adopt a combination of the forward maximum matching algorithm and the backward maximum matching algorithm for Chinese word segmentation: the document is matched forwards and backwards, and the two results are compared, which eliminates ambiguity effectively [6]. Forward maximum matching is a dictionary-based segmentation algorithm that scans a string from front to back, at each position taking the longest match found in the dictionary. Backward maximum matching is similar, but scans the string from back to front. The webpage document is segmented by the two algorithms respectively, and the results are merged after duplicates are removed. For dictionary-based maximum matching, an authoritative dictionary directly determines segmentation quality; the dictionary we use has two parts, a universal dictionary and a professional dictionary.

Design of searcher
The searcher module performs keyword pretreatment, synonym expansion, query optimization and Lucene index retrieval [7]. First, the pretreatment step filters and splits the query submitted by the user, removing meaningless characters and words. Then the searcher expands the keywords with synonyms. Finally, it performs the search through the API provided by Lucene.

Default web page ranking algorithm of Lucene
Sorting the search results is very important. Lucene scores results according to its own scoring mechanism [8]. For a given query q, the score of document d is calculated as in equation (3):

    Score(d) = Σ_{t in q} tf(t in d) · idf_t · boost(t.field in d) · lengthNorm(t.field in d)    (3)

The parameters have the following meanings:
Score(d): the score of document d.

tf(t in d): the frequency of query term t in document d. In Lucene, this value is the square root of the raw frequency.
idf_t: the inverse document frequency of query term t, computed from the number of documents containing t as in equation (4):

    idf_t = log2( numDocs / (docFreq_t + 1) ) + 1    (4)

where numDocs is the total number of indexed documents and docFreq_t is the number of documents containing query term t.
boost(t.field in d): a boosting factor for query term t in a field, with default value 1. Raising it increases the importance of that field and, with it, of the document.
lengthNorm(t.field in d): a factor reflecting the length of the document: the longer the document, the lower this value, and vice versa.

We can see that in Lucene's ranking algorithm the position of the query terms within a document does not matter: the more often the query terms appear in a document, the higher that document's score. The disadvantage of the algorithm is that it reflects neither the importance of term positions nor the importance values of the web pages themselves.

Improvement of PageRank algorithm
The PageRank algorithm expresses the importance of a web page as a number. Its basic idea is that a page must be a high-quality page if it is linked from other high-quality pages [9]; a page with a higher PageRank value is ranked nearer the top of the retrieval results. The PageRank value of web page u is calculated as in equation (5):

    PR(u) = (1 - d) + d · Σ_{i=1..n} PR(T_i) / C(T_i)    (5)

where PR(u) is the PageRank value of web page u, PR(T_i) is the PageRank value of page T_i linking to u, C(T_i) is the number of out-links of T_i, and d is the damping coefficient, 0 < d < 1, generally taken as 0.85 [10]. From Eq. (5) we can see that the PageRank algorithm does not distinguish whether a linking page is in the same site as the page it links to.
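The iteration of Eq. (5) can be sketched as follows. This is an illustrative sketch: the link-graph representation is our own, and dangling pages (no out-links) simply contribute no share in this version:

```java
import java.util.*;

// Iterative PageRank per Eq. (5): PR(u) = (1 - d) + d * sum of PR(Ti)/C(Ti)
// over the pages Ti that link to u. links.get(i) lists the pages page i links to.
public class PageRank {
    public static double[] compute(List<List<Integer>> links, double d, int iters) {
        int n = links.size();
        double[] pr = new double[n];
        Arrays.fill(pr, 1.0);                 // initial PageRank of every page is 1
        for (int it = 0; it < iters; it++) {
            double[] next = new double[n];
            Arrays.fill(next, 1.0 - d);       // the (1 - d) base term
            for (int i = 0; i < n; i++) {
                int out = links.get(i).size();
                if (out == 0) continue;       // dangling page: no share distributed
                double share = d * pr[i] / out;
                for (int j : links.get(i)) next[j] += share;
            }
            pr = next;
        }
        return pr;
    }
}
```

The site-aware refinement introduced next keeps this same loop and only changes how each in-link's share is weighted, by µ or (1 − µ) depending on whether the linking page is in the same site as the target.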
We improve the PageRank algorithm in this paper as shown in equation (6):

    PR(u) = (1 - d) + d · ( µ · Σ_{T_i in V_i} PR(T_i) / C(T_i) + (1 - µ) · Σ_{T_j in V_o} PR(T_j) / C(T_j) )    (6)

where V_i is the set of web pages that link to u from the same site as u, V_o is the set of web pages that link to u from other sites, and µ is a weighting factor, 0 < µ < 1, generally taken as 0.5. The initial PageRank values of all web pages are set to 1; with Eq. (6), the PageRank values converge to fixed values after about 20 iterations.

Improvement of Lucene web page ranking algorithm based on link analysis
We improve the Lucene webpage ranking algorithm by introducing the PageRank value, as shown in equation (7):

    NewScore(d) = α · Score(d) + β · PageRank(d)    (7)

where α and β are empirical coefficients whose exact values are determined through repeated experiments. In our experiments, both coefficients are set to 1.

Analysis of experiment results
We verified the improved, link-analysis-based Lucene ranking algorithm through two comparative experiments. First, we searched with 20 random keywords using the default Lucene ranking algorithm; then we ran the same searches with the improved ranking algorithm. Finally, we counted the number of eligible web pages among the first 100 records retrieved. The results

are shown in Fig. 3. The experimental results show that the precision ratio is higher with the improved ranking algorithm.

Fig. 3. The experimental results of the improved algorithm contrasted with the default algorithm. (For each of the 20 queries, the number of results meeting the requirements among the first 100 records, default Lucene algorithm vs. improved Lucene algorithm.)

To compare this system with general search engines, we entered 20 random keywords into Baidu, Google and this system respectively, and again counted the eligible web pages among the first 100 records retrieved. The results are shown in Fig. 4. They show that this system achieves a higher precision ratio in professional-field retrieval than the general search engines.

Fig. 4. The experimental results of this system contrasted with general search engines. (For each of the 20 queries, the number of results meeting the requirements among the first 100 records, for Baidu, Google and this system.)

CONCLUSION
The professional search engine is becoming a research hotspot thanks to the precision of its search results. In this paper, we analyzed the overall architecture of a professional search engine system and designed the function of each module after in-depth study of Heritrix and Lucene. We extended Heritrix to collect web pages accurately and efficiently. We also analyzed the default web page ranking algorithm of Lucene and improved it with link analysis. Finally, we designed and implemented a professional search engine system that searches rapidly. Experimental results show that the system achieves higher precision in professional-field retrieval.

Acknowledgements
This work was financially supported by the Foundation for Beijing Teacher Team Construction - Youth with Outstanding Ability Project (No. YETP1414) and the scientific research program of Beijing Institute of Fashion Technology (No. 2012A-17).

REFERENCES
[1] Zhao Ke, Lu Peng, Li Yongqiang. Design and Implementation of Search Engine Based on Lucene. Computer Engineering, 2011, 37(16): 39-41 (in Chinese).
[2] Information on http://lucene.apache.org/
[3] Information on http://crawler.archive.org/
[4] Liu Qinchuang. Journal of Hanshan Normal University, 2008, 29(3): 22-25 (in Chinese).
[5] Wang Shuo, You Feng, Shan Lan, Zhao Hengyong. Computer Engineering and Applications, 2008, 44(19): 142-145 (in Chinese).
[6] Liu Weidong, Lu Ling. Computer and Modernization, 2011, 191(7): 96-98 (in Chinese).
[7] Zhang Xian, Zhou Ya. Computer Systems and Applications, 2009, 18(2): 155-158 (in Chinese).
[8] Gu Wenli, Chen Wei, Chen Jiao, Lu Xiaoye. Computer Systems and Applications, 2012, 21(2): 214-217 (in Chinese).
[9] Wang Dong, Lei Jingsheng. An Improved Ranking Algorithm Based on PageRank. Microelectronics and Computer, 2009, 26(4): 210-213 (in Chinese).
[10] Liang Zhengyou, Pan Tao. Parallel Realization of PageRank Algorithm on Nutch. Computer Engineering and Design, 2010, 31(20): 4354-4356 (in Chinese).