A New Model of Search Engine based on Cloud Computing


DING Jian-li 1,2, YANG Bo 1
1. College of Computer Science and Technology, Civil Aviation University of China, Tianjin 300300, China
2. Tianjin Key Lab for Advanced Signal Processing, Civil Aviation University of China, Tianjin 300300, China
doi:10.4156/jdcta.vol5.issue6.28

Abstract

With the rapid increase in websites and internet users, the traditional search engine faces great challenges in real-time search, response speed and the storage of massive numbers of pages. A search engine deployed in the cloud can overcome these shortcomings, because cloud computing offers two major advantages: mass data processing and mass data storage. By analyzing the open-source cloud computing system Hadoop, this paper constructs a search engine model on a cloud platform and optimizes the core algorithm of the search engine to improve its overall performance.

Keywords: Cloud Computing, Hadoop, Search Engine, Model, Algorithm Optimization

1. Introduction

In recent years, researchers have focused mainly on vertical search engines [1,2] and have achieved a great deal. However, these studies address only the application areas of search engines. Meanwhile, with the rapid development of internet technology, combined with the tremendous growth of 3G networks, the online population and the number of web pages are increasing rapidly. The traditional search engine architecture cannot adapt to this growth, and search engines now face the questions of how the mass data in the network can be stored and how it can be processed quickly. Cloud computing technology [3], with its two features of mass data storage and mass data processing, provides a new way to solve these problems. There are many open-source cloud computing projects; Hadoop, an open-source project of the Apache Software Foundation, is widely used [4]. Building the search engine on Hadoop can fully exploit the complementary strengths of the two and make up for the shortcomings of the search engine.

2. Hadoop

Hadoop is an open-source distributed parallel computing platform that offers reliability, efficiency and scalability. It consists mainly of a parallel computing framework, MapReduce [5], and a distributed file system, HDFS [6], which give Hadoop its efficient parallel computing ability and mass data storage capacity. Hadoop is designed on the assumption that system failure is the norm; it keeps the cloud computing platform running reliably by maintaining multiple replicas of data and re-distributing work to new nodes as quickly as possible when nodes fail.

Hadoop uses a master-slave structure. A cluster has a single master server (the JobTracker) and a number of slave servers (TaskTrackers). The JobTracker is the interface between users and the framework. When users submit a task to the JobTracker, it puts the task into the task queue and executes tasks on a first-come, first-served basis. The JobTracker maintains the Map and Reduce tasks assigned to the TaskTrackers. Each TaskTracker executes the instructions it receives from the JobTracker and handles the exchange of data between the Map and Reduce phases. Each TaskTracker periodically reports its completed work and updated status to the JobTracker. If a TaskTracker does not communicate with the JobTracker for a specified period, the JobTracker marks that node as dead and assigns its tasks to other nodes.
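To make the scheduling and failure handling described above concrete, the following minimal Python sketch models a first-come, first-served task queue with heartbeat-based detection of dead TaskTrackers. It is an illustration only, not Hadoop's actual implementation; all class, method and timeout names here are our own assumptions.

# Illustrative sketch only -- not Hadoop's real JobTracker/TaskTracker protocol.
import time
from collections import deque

HEARTBEAT_TIMEOUT = 30.0   # assumed: seconds of silence before a tracker is marked dead

class JobTracker:
    def __init__(self):
        self.task_queue = deque()     # tasks executed first come, first served
        self.last_heartbeat = {}      # tracker_id -> time of last report
        self.assigned = {}            # tracker_id -> set of task ids in progress

    def submit(self, task_id):
        self.task_queue.append(task_id)

    def assign_next(self, tracker_id):
        # Hand the oldest queued task to a live TaskTracker.
        if self.task_queue:
            task_id = self.task_queue.popleft()
            self.assigned.setdefault(tracker_id, set()).add(task_id)
            return task_id
        return None

    def heartbeat(self, tracker_id, completed):
        # TaskTrackers periodically report completed work and status.
        self.last_heartbeat[tracker_id] = time.time()
        self.assigned.setdefault(tracker_id, set()).difference_update(completed)

    def check_failures(self):
        # A tracker silent for too long is marked dead; its unfinished tasks
        # go back to the queue to be reassigned to other nodes.
        now = time.time()
        for tracker_id in list(self.last_heartbeat):
            if now - self.last_heartbeat[tracker_id] > HEARTBEAT_TIMEOUT:
                self.task_queue.extend(self.assigned.pop(tracker_id, set()))
                del self.last_heartbeat[tracker_id]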

2.1. MapReduce Computing Framework

MapReduce consists of two phases, Map and Reduce, which perform the mapping and reducing operations respectively. Each mapping operation is independent of the others, so the Map phase is highly parallel. The reducing operation receives the results of the mapping operations and merges them, and it is also highly parallel. It is this highly parallel distributed computation that allows mass data to be processed efficiently on the cloud computing platform. The MapReduce functions can be written as follows [7]:

Map: (in_key, in_value) -> {(key_j, value_j) | j = 1, ..., k}
Reduce: (key, [value_1, ..., value_m]) -> (key, final_value)

The inputs of Map are in_key and in_value, and its output is a set of <key, value> pairs. The input of Reduce is (key, [value_1, ..., value_m]); Reduce merges the values received from Map and outputs (key, final_value). The MapReduce operation model is shown in figure 1.

Fig 1. MapReduce Operation Model

The MapReduce execution flow is as follows:
1: Input(File A);
2: Split(A) // separate file A into m data blocks of 16 MB to 64 MB each;
3: The master program allocates m Mapper machines and r Reducer machines;
4: Mapper_1, ..., Mapper_m execute Map(in_key, in_value) in parallel and store the results in their local caches;
5: Reducer_1, ..., Reducer_r fetch the intermediate results from the Mappers by remote calls and execute Reduce(key, [value_1, ..., value_m]);
6: Output(Final_file B);
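As a concrete illustration of these signatures, the short Python sketch below implements word counting with a map function that emits (word, 1) pairs and a reduce function that sums them; the in-memory shuffle stands in for what Hadoop performs across the cluster. This is a single-process illustration of the model, not Hadoop's Java API.

# Minimal single-process illustration of the Map and Reduce signatures above,
# using word counting as the example.
from collections import defaultdict

def map_fn(in_key, in_value):
    # (in_key, in_value) -> [(key_1, value_1), ..., (key_k, value_k)]
    return [(word, 1) for word in in_value.split()]

def reduce_fn(key, values):
    # (key, [value_1, ..., value_m]) -> (key, final_value)
    return (key, sum(values))

def run_mapreduce(documents):
    # Shuffle phase: group intermediate values by key before reducing.
    grouped = defaultdict(list)
    for doc_id, text in documents.items():
        for key, value in map_fn(doc_id, text):
            grouped[key].append(value)
    return dict(reduce_fn(k, v) for k, v in grouped.items())

if __name__ == "__main__":
    docs = {"a.txt": "cloud computing cloud", "b.txt": "search engine cloud"}
    print(run_mapreduce(docs))   # {'cloud': 3, 'computing': 1, 'search': 1, 'engine': 1}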

2.2. HDFS (Hadoop Distributed File System)

HDFS is designed in master-slave mode and manages all the data of the cloud computing platform. It consists of two kinds of nodes: a single NameNode and many DataNodes. The NameNode provides the metadata service of HDFS, while the DataNodes provide the storage blocks. The HDFS architecture is shown in figure 2.

Fig 2. HDFS Architecture

When a user program accesses HDFS, it first visits the NameNode to obtain metadata, and then accesses the data directly on the DataNodes. This design separates the control flow from the data flow: only control flow passes between the user program and the NameNode, with no data stream, which greatly reduces the load on the NameNode and prevents it from becoming a bottleneck in system performance. Data flows directly between the DataNodes and the user program. Because a file is divided into data blocks that are stored and replicated across nodes, the user program can access several DataNodes at the same time, which makes the I/O of the whole system highly parallel and improves overall system performance.
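The separation of control flow and data flow can be sketched as follows. The Python fragment below is purely schematic, not the real HDFS client protocol; get_block_locations and read_block are hypothetical stand-ins for the NameNode metadata lookup and the direct DataNode block reads, which are issued in parallel.

# Schematic read path: metadata from the NameNode, data directly from DataNodes.
# The namenode/datanodes objects and their methods are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

def read_file(namenode, datanodes, path):
    # Control flow only: ask the NameNode which blocks make up the file
    # and which DataNode holds each block. No file data is transferred here.
    locations = namenode.get_block_locations(path)   # [(block_id, datanode_id), ...]

    # Data flow: stream the blocks directly from the DataNodes, in parallel,
    # so several DataNodes can serve the same client at once.
    with ThreadPoolExecutor() as pool:
        blocks = pool.map(lambda loc: datanodes[loc[1]].read_block(loc[0]), locations)
    return b"".join(blocks)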

3. Overview of Search Engine

The search engine is the most effective tool for discovering usable information on the World Wide Web, and it has become a necessity for exploring the internet. Without search engines, the information on websites, blogs and other sources would be of little use, because it is practically impossible to look through websites one by one to find information. A search engine is a system that, following certain strategies, uses specific computer programs to collect information on the internet, organizes that information, and provides retrieval services to users. A search engine generally consists of five parts: the fetcher (information collector), parser, indexer, retriever and user interface. The system structure is shown in figure 3.

Fig 3. Search Engine Architecture

The Fetcher, also known as the web crawler, finds and collects information from the internet. The Parser analyzes the collected documents and passes them to the Indexer. The Indexer transforms each document into a form that is easy to retrieve and stores it in the index database. Using the indexes created by the Indexer and the keywords entered by users, the Retriever finds the documents that match the keywords and ranks the results. The user interface lets users find information conveniently.

The search engine workflow is as follows (a sketch of the breadth-first crawling in step 2 is given at the end of this section):
1) Prepare links. Seed links are added to an XML or text file and submitted to WebDB, a local folder. The link-preparation module reads one of the URLs and passes it to the Fetcher.
2) Crawl pages. After receiving a URL, the Fetcher crawls pages using a breadth-first search strategy and stores all the pages in local files.
3) Parse pages. After receiving the crawled pages, the Parser analyzes them and extracts the page text and feature information such as title, time and source. After parsing, the Parser has two further tasks. One is to store the URL lists extracted from the pages in the local folder Segment and to generate a new link list for the Fetcher to crawl. The other is to submit the page text to the Indexer. The Parser also merges and de-duplicates the new link lists and saves them locally so that new crawling tasks can be assigned easily.
4) Repeat steps 2 and 3 until the crawling depth set by the user is reached.
5) Create the index. After the steps above are finished, the Indexer builds an index and stores it locally.
6) Handle user searches. When the user submits a query, the retrieval module searches the local index for pages related to the topic, merges the query results, and shows them to the user ordered from most relevant to least relevant.

As the framework above shows, the traditional search engine works in a centralized manner and therefore cannot achieve efficient parallel operation. As a result, current search engines struggle to process the mass data on the network efficiently and to provide users with timely search services. These are the problems search engines face, and they are the problems addressed in this paper.
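For illustration, the breadth-first crawling strategy of step 2 can be sketched in a few lines of Python. This is a minimal sketch under our own assumptions (the fetch_page and extract_links helpers are hypothetical placeholders for page download and link extraction), not the crawler actually used here.

# Minimal breadth-first crawler sketch; fetch_page and extract_links are
# hypothetical helpers standing in for HTTP download and HTML link extraction.
from collections import deque

def bfs_crawl(seed_urls, max_depth, fetch_page, extract_links):
    visited = set(seed_urls)
    queue = deque((url, 0) for url in seed_urls)   # (url, depth) pairs
    pages = {}                                     # url -> page content
    while queue:
        url, depth = queue.popleft()
        page = fetch_page(url)
        pages[url] = page
        if depth < max_depth:                      # stop at the user-set crawling depth
            for link in extract_links(page):
                if link not in visited:            # crawl each page only once
                    visited.add(link)
                    queue.append((link, depth + 1))
    return pages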

4. Search Engine Model Based on Cloud Computing Platform

From the analysis of Hadoop's distributed computing framework and of the current search engine architecture above, it can be seen that the search engine is not good at dealing with mass data, whereas cloud computing can compensate for this weakness with its efficient distributed computing framework MapReduce and its distributed file system HDFS with mass data storage capacity. Building the search engine on the Hadoop platform solves the problems of mass data processing and mass data storage, and the search engine's real-time search and response speed are greatly improved. The search engine model based on the cloud computing platform is shown in figure 4.

Fig 4. Search Engine Model Based on Cloud Computing Platform

As figure 4 shows, the bottom layer of the search engine is a cloud computing platform based on Hadoop. Compared with the traditional search engine, this model introduces two improvements. First, the fetcher, parser, indexer and retriever compute in a different way: in this model they run on the MapReduce framework. Second, the index database is replaced by HDFS, and the index is managed and maintained by the distributed file system in master-slave mode. The workflow of this model is as follows (a schematic sketch of the crawling phase as a MapReduce job is given at the end of this section):
1) Prepare links. Seed links are added to an XML or text file and submitted to WebDB, a folder in HDFS. The link-preparation module reads the URLs and splits them into link blocks.
2) Crawl pages. After receiving the link blocks, the JobTracker described above starts MapReduce tasks and assigns the page-crawling work to TaskTrackers. A TaskTracker (Fetcher) that receives a Map task crawls pages using the breadth-first search strategy. A TaskTracker (Fetcher) that receives a Reduce task merges and filters the pages crawled by the Map tasks and stores all the pages in HDFS.
3) Parse pages. When the Parser receives the crawled pages, MapReduce tasks begin. A TaskTracker (Parser) that receives a Map task analyzes the pages and extracts the page content and feature information such as title, time and source. After parsing, the Parser has two further tasks. One is to store the URL lists extracted from the pages in Segment, a folder in HDFS, and to generate a new link list for the Fetcher to crawl. The other is to submit the page content to the Indexer. A TaskTracker (Parser) that receives a Reduce task merges and de-duplicates the new link lists and submits them to HDFS, where they are managed centrally by the NameNode so that new crawling tasks can be assigned easily.
4) Repeat steps 2 and 3 until the crawling depth set by the user is reached.
5) Create the index. The Indexer starts MapReduce tasks after the pages are crawled. A TaskTracker (Indexer) that receives a Map task builds an index and stores it locally. A TaskTracker (Indexer) that receives a Reduce task submits the indexes stored on all the Map-side TaskTrackers (Indexers) to the NameNode for unified management, which facilitates user queries.
6) Handle user searches. When the user submits a query, the retrieval module starts MapReduce tasks. A TaskTracker (Retriever) that receives a Map task searches its local index for pages related to the topic. A TaskTracker (Retriever) that receives a Reduce task merges the query results and returns them to the user ordered from most relevant to least relevant.

WebDB and Segment are the data structures used in general search engines and are not described again here.
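To connect this workflow with the Map/Reduce signatures of section 2.1, the crawling phase of step 2 can be sketched roughly as below. This is only our schematic reading of the model, not the authors' implementation; fetch_page and store_to_hdfs are hypothetical placeholders, and the breadth-first expansion of section 3 is omitted for brevity.

# Schematic mapping of the crawl phase (step 2) onto the Map/Reduce signatures
# of section 2.1; fetch_page and store_to_hdfs are hypothetical placeholders.

def crawl_map(block_id, link_block, fetch_page):
    # Map: a Fetcher TaskTracker downloads every URL in its link block
    # and emits (url, page) intermediate pairs.
    return [(url, fetch_page(url)) for url in link_block]

def crawl_reduce(url, page_versions, store_to_hdfs):
    # Reduce: a Fetcher TaskTracker merges duplicate fetches of the same URL,
    # keeps one copy, and stores it in HDFS.
    page = page_versions[0]
    store_to_hdfs(url, page)
    return (url, len(page))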

5. Keywords Weighting Improvement

The search engine model based on the cloud computing platform has been described above; however, the keyword weighting still needs to be optimized to obtain the best performance. TF-IDF is the most widely used method for keyword weighting. It considers both the importance of a keyword within a single document and its importance across the entire data set, which makes the weight of a keyword that appears often in a single document and across the data set more reasonable. For example, the frequency of "biological" is high in a data set about biology, but it says less about an individual document than "DNA" or "cell" does. Selecting keywords well is important for excluding nonessential information and reducing indexing time. The inverse document frequency (IDF) and term frequency (TF) are calculated as in (1) and (2) [8]:

IDF_k = \log(N / n_k)    (1)
TF_{k,d} = t_{k,d} / T_d    (2)

where N is the number of documents (texts) in the data set, n_k is the number of documents that contain keyword k, t_{k,d} is the number of times keyword k appears in document d, and T_d is the total number of words in document d. Combining TF and IDF, the weight of keyword k in document d is calculated as in (3):

w_{k,d} = TF_{k,d} \cdot IDF_k    (3)

As (1) and (2) show, only term frequency, a statistical value, is considered when calculating the keyword weight. But there are also many non-statistical signals, for example keywords that appear in the title, or keywords set in bold or italics. Compared with ordinary page text, these elements represent the characteristics of a page better, and they are very easy to analyze: the relevant parameters are obtained simply by distinguishing the markup in the HTML source. The improved TF-IDF method is given in (4):

w_{k,d} = \alpha \cdot TF_{k,d} \cdot IDF_k    (4)

where α is the page parameter. When the keyword appears in the title, α takes its maximum value of 10; when the keyword is bold or italic in the document, α takes 5; when there is no tag, α takes 1.
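As a concrete illustration of the improved weighting, the Python sketch below computes the tag-aware TF-IDF of equation (4). The α values 10, 5 and 1 follow the text above, while the input format (each document as a list of (word, tag) pairs) is our own assumption for the example.

# Tag-aware TF-IDF following equations (1)-(4); the document representation
# (a list of (word, tag) pairs, tag in {"title", "bold", "italic", None}) is assumed.
import math
from collections import Counter

TAG_ALPHA = {"title": 10, "bold": 5, "italic": 5, None: 1}   # values from Section 5

def keyword_weights(documents):
    # documents: {doc_id: [(word, tag), ...]}
    n_docs = len(documents)                        # N
    doc_freq = Counter()                           # n_k: documents containing keyword k
    for doc in documents.values():
        doc_freq.update({w for w, _ in doc})

    weights = {}
    for doc_id, doc in documents.items():
        total_words = len(doc)                     # T_d
        term_freq = Counter(w for w, _ in doc)     # t_{k,d}
        best_alpha = {}                            # strongest tag seen for each keyword
        for w, tag in doc:
            best_alpha[w] = max(best_alpha.get(w, 1), TAG_ALPHA.get(tag, 1))
        weights[doc_id] = {
            w: best_alpha[w] * (term_freq[w] / total_words) * math.log(n_docs / doc_freq[w])
            for w in term_freq
        }
    return weights

if __name__ == "__main__":
    docs = {
        "p1": [("cloud", "title"), ("computing", None), ("cloud", None)],
        "p2": [("search", "bold"), ("engine", None), ("cloud", None)],
    }
    print(keyword_weights(docs))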

6. Experiment and Results Analysis

Limited by the experimental environment, we used 8 PCs running the Ubuntu OS to build the cloud computing platform and installed the search engine system on it. One computer served as the NameNode and the remaining seven as DataNodes. To test the efficiency of the search engine deployed on the cloud computing platform, we took the crawling depth as the index of data growth and crawled data from our university intranet (www.cauc.edu.cn) from low depth to higher depths. Crawling at each depth was carried out three times, and each reported result is the average of the three runs. The experimental results are shown in table 1 and figure 5.

Table 1. Crawling Time of Different Platforms (unit: min)
Depth            1     2      3      4      5      6      7      8      9      10
Cloud Computing  8.52  13.42  16.08  20.57  23.00  26.62  30.72  34.38  36.93  38.77
Centralized      5.32  10.12  13.92  17.25  23.28  27.70  32.42  37.80  44.95  49.83

Fig 5. Comparison of Different Platforms in Crawling Time

As table 1 and figure 5 show, before depth 5 the search engine deployed on the cloud computing platform takes more time than the centralized search engine. This is because, when the amount of data is small, the cloud-based search engine spends a larger proportion of the crawling time on communication between nodes, which increases the overall crawling time; the centralized search engine, with no such communication and only a small amount of data, crawls faster. Beyond depth 5, as the amount of data grows, the advantage in processing mass data gradually emerges and the crawling time becomes much shorter than that of the centralized search engine. This advantage becomes more and more pronounced as the amount of data increases. After the data had been crawled, keyword searches issued through the client returned satisfactory top-20 results within 0.01 ms to 0.49 ms. The analysis of the experimental results shows that a search engine deployed on the cloud computing platform can solve the inefficiency of the traditional search engine in handling mass data, and the presentation of results is greatly improved.

7. Conclusion

Based on a deep analysis of cloud computing and search engines, a good point of combination between the two has been found: deploying the search engine on the cloud computing platform addresses the existing problems of the search engine, namely mass data processing and mass data storage. A search engine model based on the open-source cloud computing platform Hadoop has been proposed, and two algorithms of the search engine have been improved. The experimental results are satisfactory overall and meet expectations, but there are two shortcomings. First, the experiments were carried out on an intranet rather than on the internet. Second, only one website with a limited data set was used, so the mass data processing capacity of cloud computing was not fully exercised. Further work will focus on these two aspects in order to achieve better results.

8. Acknowledgement

Foundation items: Project (2006AA12A106) supported by the National High Technology Research and Development Program of China (863 Program); Projects (60879015, 60572167) supported by the National Natural Science Foundation of China; Project (MHRD201013) supported by the Civil Aviation Administration of China Science Foundation.

9. References

[1] Dorin Carstoiu, Elena Lepadatu, Mihai Gaspar, "Hbase - non SQL Database, Performances Evaluation", IJACT, Vol. 2, No. 5, pp. 42-52, 2010.
[2] Waralak V. Siricharoen, "Using Integrated Ontologies for Determining Objects towards Software Engineering Approach", AISS, Vol. 2, No. 4, pp. 61-70, 2010.
[3] Hochul Jeon, Taehwan Kim, Joongmin Choi, "Personalized Information Retrieval by Using Adaptive User Profiling and Collaborative Filtering", AISS, Vol. 2, No. 4, pp. 134-142, 2010.
[4] Omid Kashefi, Nina Mohseni, Behrouz Minaei, "Optimizing Document Similarity Detection in Persian Information Retrieval", JCIT, Vol. 5, No. 2, pp. 101-106, 2010.
[5] Wang Ying, Liu Guangli, Bai Shengli, Yang Zhimin, "Attribute Extraction System for Agricultural SEM", JCIT, Vol. 5, No. 3, pp. 20-23, 2010.
[6] Debajyoti Mukhopadhyay, Sukanta Sinha, "A Novel Approach for Domain Specific Lucky Web Search", JCIT, Vol. 5, No. 5, pp. 72-80, 2010.
[7] Peng Liu, "Cloud Computing", Electronic Industry Press, pp. 10-18, 2010.
[8] Chang-yuan Feng, Jie-xin Pu, "Research about Algorithm of Web Text Feature Selection", Application Research of Computers, Vol. 7, pp. 36-38, 2005.
[9] Tao Wang, Xiao-zhong Fan, "Design and Implementation of Topical Crawler", Computer Applications, Vol. 24, pp. 270-272, 2004.
[10] Li-zhu Zhou, Ling Lin, "Survey on the Research of Focused Crawling Technique", Computer Applications, Vol. 25, No. 9, pp. 1965-1969, 2005.
[11] T. Haveliwala, "Topic-sensitive PageRank", Proceedings of the 11th International World Wide Web Conference, Hawaii, pp. 517-526, 2002.
[12] Michael Armbrust, Armando Fox, Rean Griffith, et al., "Above the Clouds: A Berkeley View of Cloud Computing", Technical Report, UC Berkeley RAD Laboratory, 2009.
[13] Hadoop Distributed File System: Architecture and Design.
[14] Hadoop Site. http://hadoop.apache.org.
[15] Hadoop Map/Reduce Tutorial.
[16] http://hadoop.apache.org/common/docs/r0.18.2/cn/mapred_tutorial.html.