AN EFFICIENT COLLECTION METHOD OF OFFICIAL WEBSITES BY ROBOT PROGRAM

Masahito Yamamoto, Hidenori Kawamura and Azuma Ohuchi
Graduate School of Information Science and Technology, Hokkaido University
North 14, West 9, Kita-ku, Sapporo 060-0814, Japan
masahito@complex.eng.hokudai.ac.jp

ABSTRACT

In this paper, we present a robot program that can gather many Web pages and extract only official Web pages. Official Web pages are very useful for learning valuable local information because they contain much information about a facility. For example, in the case of accommodation, the official website contains photos, various stay plans, information about on-site facilities (restaurants, spa, shops, etc.), and online reservation pages. We define an official website as a website built by the owner of the facility, and a Web page as a page on such a website. Although official websites are relevant for users, it is difficult to search for them within certain geographical areas. Existing search engines are not suitable for this purpose because they rely only on keywords to extract information. Our proposed robot program can extract many websites in a given region and detect whether each website is an official one. In this paper, we focus on official accommodation websites, because they have become very important for the many Internet users who want to reserve rooms. The program gathers a large number of Web pages, extracts only the Web pages of accommodation facilities among them, classifies the accommodation websites by facility name, and ranks them in order of the probability that each website is an official one.

Key Words: internet, information technology, tourism informatics

1. INTRODUCTION

With the rapid growth of the Internet environment, the World Wide Web (WWW) has become a popular tool for obtaining various kinds of information. Indeed, we can obtain a great deal of information about news, food, fashion, politics, medicine, tourism, and so on.
In particular, there are many cases in which we want to learn about special plans or other up-to-date information on certain facilities, such as restaurants or hotels, from their official websites. In this paper, an official website of a facility is defined as a website built by the owner of the facility. Currently, however, it is impossible to detect exactly whether a website is an official one, because we cannot always know who actually built the website. For example, if tourists decide to make a trip, it is very useful for them to gather information about the area they will visit, or to reserve a restaurant, hotel, or transportation, through official websites by Web browsing. Particularly for accommodations, official Web pages are very useful for learning valuable local information because they contain much information about the accommodation, including photos, various stay plans, on-site facilities (restaurants, spa, shops, etc.), and online reservation pages.
However, due to the rapid growth of the WWW, it is becoming increasingly difficult to find official websites, and especially to find Web pages belonging to a certain category. Although official websites are relevant for users, it is difficult to search for them within certain geographical areas. Existing search engines are not suitable for this purpose because they rely on keywords to extract information, and accommodation websites do not always contain the most common words; of course, it is easy to find the official Web page if the name of the hotel is already known. In this paper, we develop an efficient method for collecting official Web pages with a robot program that can automatically extract official websites in a given region. The program gathers a large number of Web pages by using a small set of Web pages as a seed, expands it by crawling many related Web pages, extracts only the Web pages belonging to the target category among them, classifies these websites by facility name, and ranks them in order of the probability that each website is an official one. To develop the robot program, we used Web crawling and data mining techniques. The contribution of this paper is therefore to improve current Web mining techniques and to provide a technique for building a search engine that is very useful for Internet users.

2. OFFICIAL WEBSITE

As mentioned earlier, it is difficult to detect whether a Web page is an official one, because we cannot know the developer of a website precisely. Fortunately, however, there are many cases where it is relatively easy to determine that a site is an official one by manually browsing its pages. For example, we decided that a website is an official one by finding the contact telephone number, the address of the facility, an original stay plan, a reservation page, and so on.
Therefore, we manually determined whether each site is an official one in order to evaluate the accuracy of the robot program proposed in this paper. Note that some accommodations have two or more official websites; for example, a hotel belonging to a large hotel group. The most straightforward method of searching for official accommodation websites in a given region is to use a term-based search engine such as Google (http://www.google.com/ [September 20, 2003]) or AltaVista (http://www.altavista.com/ [September 20, 2003]). Such search engines are based on a robot generally called a crawler or a spider. The crawler collects many Web pages in advance by using link analysis and keeps them in a database; Google's crawler, for example, collects three billion Web pages. When the user sends a query string, Web pages including the query string are extracted, ranked by the engine's ranking algorithm, and presented to the user. However, such search engines are not suitable for searching for official accommodation websites in an area. For example, one might expect Web pages on official websites to contain the word "hotel" in their text. Unfortunately, many official Web pages do not contain the word "hotel", and in general it is very difficult to find words that are expected to appear only in official accommodation websites. If you input the word "hotel" in the query box of a term-based search engine, you will find that many of the extracted Web pages are not related to accommodation websites at all. Another candidate method is to use Internet directories such as Yahoo! Japan (http://www.yahoo.co.jp/ [September 20, 2003]) and the ODP (http://dmoz.org/ [September 20, 2003]). An Internet directory is a database of official websites that have been classified into categories in advance.
It is easy to find accommodation websites in a given region by selecting categories and regions in an Internet directory; however, the number of registered Web pages is very small because the Web pages are classified by humans. As a result, it is difficult to find many official accommodation websites in a given region by
using Internet directories, although the quality of their results is far higher than that of term-based searches.

3. COLLECTION ALGORITHM

In this section, we present the proposed algorithm for finding many official accommodation websites in a specific region. The algorithm is divided into two main parts: (a) collection of candidate Web pages, and (b) extraction of official Web pages. To implement (a), Web crawling and link analysis techniques are used. In part (b), we apply heuristics obtained from preliminary experiments.

3.1. Web Crawling and Link Analysis Techniques

In order to gather many candidate official accommodation Web pages, a robot (a computer program called a crawler or spider) is developed and utilized. If a target official Web page is not collected in phase (a) of the algorithm, the algorithm cannot find that official Web page in phase (b) either. Therefore, a large number of Web pages related to accommodations in the given region has to be gathered in (a). For this purpose, Kleinberg's method (Kleinberg, 1998) is adopted to gather many candidate Web pages related to accommodation. The method obtains the hyperlinks of given Web pages from the text information on each page and searches for further pages by following that hyperlink information. Details are described in the next subsection.

3.2. Algorithm

In this subsection, we describe the details of our proposed algorithm. For simplicity, suppose that a specific area name is given.

Step 1: Collection of candidate Web pages. By using a telephone book, the program extracts the telephone area codes for the area name. The set of area codes is denoted by P. Words that commonly appear in official Web pages are extracted and registered with the program in advance. These words are determined according to the results of preliminary experiments.
In this paper, we use the set of Japanese words meaning hotel, pension, hostel, lodge, guest house, accommodation, reservation, fee, food, and hot spa. The set of these words is denoted by Q. About n (n = 200 in this paper) Web pages are selected, by using search engines, from among the Web pages that include both a telephone number with an area code in P and at least one word of Q. This set of Web pages is called the root set and is denoted by R. For each page p in R, all Web pages linked from the page are collected, and the set of collected Web pages is denoted by R+(p). The set R+ is the union of R+(p) for all p in R. To do this, Web crawling techniques can be utilized; i.e., we developed and implemented a robot program that can detect Web links and obtain the HTML files of the linked pages. Similarly, for each page p in R, all Web pages linking to the page are collected, and the set of collected Web pages is denoted by R-(p). The set R- is the union of R-(p) for all p in R. The relation among the sets R, R+(p), and R-(p) is shown in Fig. 1. The union R ∪ R+ ∪ R- is the candidate set S of official accommodation websites.
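As an illustration, the link expansion of Step 1 can be sketched as follows. This is a minimal sketch, not the authors' implementation: fetch_html and backlinks_of are hypothetical helpers standing in for the crawler's page fetcher and for a backlink source (e.g. a search engine's link query), neither of which the paper specifies.

```python
import re
from urllib.parse import urljoin

HREF_RE = re.compile(r'href=["\']([^"\']+)["\']', re.IGNORECASE)

def extract_links(base_url, html):
    """Return absolute URLs of all pages hyperlinked from this page."""
    return {urljoin(base_url, h) for h in HREF_RE.findall(html)}

def expand_root_set(root_set, fetch_html, backlinks_of):
    """Build the candidate set S = R ∪ R+ ∪ R-.

    fetch_html(url) -> HTML text of the page;
    backlinks_of(url) -> iterable of URLs linking to the page.
    """
    r_plus, r_minus = set(), set()
    for page in root_set:
        # R+(p): pages linked from p, found by parsing p's HTML.
        r_plus |= extract_links(page, fetch_html(page))
        # R-(p): pages linking to p, obtained from an external index.
        r_minus |= set(backlinks_of(page))
    return root_set | r_plus | r_minus
```

In practice the two helper callbacks would wrap an HTTP client and a search-engine API; here they are kept abstract so the set construction itself is visible.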
[Fig. 1. Collection of candidate Web pages based on the link analysis: the root set R, the forward-linked sets R+(p), and the back-linked sets R-(p).]

The above method is part of Kleinberg's HITS (Kleinberg, 1998). There are two reasons for performing this operation. First, the candidate pages collected by a term-based search may not contain all official accommodation websites, so it is necessary to expand R; in fact, R contains only about 70% of the official accommodation websites that the program can find, and the remaining 30% are collected by the expansion above. Because link-connected Web pages often share the same topics, we expanded R by using the link structure. The second reason is that we need a set of mutually connected pages, since the link structure is used for extracting official websites in the next step. Furthermore, in order to confirm that the word set Q is adequate, we performed a preliminary experiment. We manually checked 358 official accommodation websites in Hokkaido registered in the So-net search engine. (Hokkaido is Japan's northern island; it features many attractive places and good hotels.) Among these pages, all (100%, confirmed) include at least one element of Q, all (100%, confirmed) include at least one element of P, and 93.53% of the pages include both an element of P and an element of Q on the same page. From these results, Q appears to be adequate.

Step 2: Extraction of accommodation Web pages. From the set S, all pages including a small number of telephone numbers (1-4 in the experiment presented in this paper) are extracted, and the extracted set is denoted by S'. All pages in S' are classified into facility sets C_i according to their telephone numbers. Finally, for each set C_i, the program ranks the pages in order of the probability that each website is an official one. To do this, we used a data mining technique based on link analysis and some heuristics.
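A minimal sketch of Step 2 follows. The telephone-number pattern mirrors the 0136-2X-ABCD form used later in the experiments, and the scoring weights are illustrative assumptions: the paper names the two heuristics (in-degree and URL length) but does not publish its exact evaluation function.

```python
import re
from collections import defaultdict

def group_by_phone(pages, area_codes, max_numbers=4):
    """pages: {url: page text}. Returns {phone: set(urls)} -- the sets C_i.
    Pages with more than max_numbers distinct numbers (likely listing
    pages) are skipped, as in the paper's 1-4 number filter."""
    pattern = re.compile(
        r'(?:' + '|'.join(map(re.escape, area_codes)) + r')-\d{2}-\d{4}')
    groups = defaultdict(set)
    for url, text in pages.items():
        numbers = set(pattern.findall(text))
        if 1 <= len(numbers) <= max_numbers:
            for num in numbers:
                groups[num].add(url)
    return dict(groups)

def rank_candidates(candidates, w_link=1.0, w_len=0.02):
    """candidates: [(url, in_degree)] for one facility set C_i.
    Higher in-degree and shorter URL => more likely official.
    The weights are hypothetical, not the paper's."""
    return sorted(candidates,
                  key=lambda c: w_link * c[1] - w_len * len(c[0]),
                  reverse=True)
```

With equal in-degrees, the shorter URL (typically the official site's own domain) is ranked first, matching the URL-length heuristic described below.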
The main heuristics use two measurements: (1) the in-degree of each Web page, and (2) the length of each Web page's URL. Generally, official accommodation websites are linked from many other Web pages, such as hotel booking sites, travel agents, and self-governing bodies. Therefore, the program ranks each website by evaluating the in-degree of every site in C_i. Furthermore, the URL of the official website's page containing the telephone number is generally short in comparison with those of hotel booking sites. An evaluation function is composed from these heuristics, and the program ranks the pages by this function. The heuristics were obtained through preliminary experiments: we automatically collected about 5,500 Web pages registered in a certain search engine site and derived the heuristics by extracting common features of the official accommodation websites among them. For example, the average URL length of the Web pages containing the telephone number on official websites is 46.9 characters, whereas the average over all candidate Web pages collected by the proposed program is 59.5 characters. We therefore confirmed that relatively short URLs are used in official Web pages. The reason is that the domain name of an official website is
relatively simpler than those of other sites, and the URLs of Web pages on hotel booking sites or in personal diary pages contain many more characters.

4. EXPERIMENTS AND EVALUATIONS

To evaluate the effectiveness of our proposed program, we applied it to Kutchan, a small town in Hokkaido. This town includes the luxurious tourist resort Niseko-Hirahu, which is famous for its ski slopes, tennis courts, and various other outdoor sports facilities. Therefore, there are many accommodations, although almost all of them are small-scale. There are about 140 accommodations in Kutchan, and based on preliminary experiments we assume that about 55 of them have their own official website. Nevertheless, only five websites are registered in Yahoo!. The telephone numbers of Kutchan have the form 0136-2X-ABCD, where X is 1, 2, or 3, and A, B, C, and D can be any single digit. The results are as follows. Since the area code of Kutchan is 0136, the string 0136 is used as the geographical term set P. The program collected 1,302 Web pages as the root set R. By employing Kleinberg's method, the program built the set S; the number of Web pages collected as S was 19,454. From all candidate pages of S (excluding pages containing more than five telephone numbers per page), the program extracted all telephone numbers; in this experiment, 189 telephone numbers were found. Some accommodations have two or more telephone numbers, for example, one for the general office and one for reservations only. Our program does not exclude such cases (62 cases in this experiment), so two or more entries for one accommodation may be extracted. To evaluate our program accurately, we removed such duplications and then evaluated 127 telephone numbers. The program detected that 60 of these 127 numbers belonged to accommodations; the correct rate was 93.3%.
On the other hand, 15 accommodation numbers were included among the 67 numbers that the program rejected, so the correct rate was 77.6% in that case. As mentioned earlier, since extracting only the official accommodation websites is a very difficult task, we consider the proposed program very effective. The results are summarized in Fig. 2.

[Fig. 2. The evaluation results of the detection of accommodation Web pages: of 127 numbers in total (55.7% accommodation pages), the program answered "Yes" for 60 (93.3% correct) and "No" for 67 (77.6% correct).]

Furthermore, among the 71 candidate accommodation Web page groups, the program ranked an official page #1 for 46 of the 55 accommodations that have official websites. The rate of correctness was thus 83.6%, and in three of the remaining cases the #2-ranked page was an official one. Because these evaluation values are very high, the effectiveness of the proposed program is clear. The result is summarized in Fig. 3.
[Fig. 3. The evaluation result of the ranking of candidate Web pages: of 71 groups in total, 55 (77.4%) have an official website; the official site was ranked #1 in 83.6% of these, ranked #2 in 5.5%, and not collected in the rest.]

5. RELATED WORKS

Here, we briefly review link-based Web page analysis. Two popular link-based Web page ranking algorithms are HITS and PageRank. These algorithms use link topology to capture an average opinion of the Web page creators. The hyperlinks of Web pages form a directed graph G = (V, E), where V is the set of nodes p_i, each representing a Web page, and E is the set of hyperlinks. The hyperlink topology of the Web graph is contained in the asymmetric adjacency matrix L = (L_ij), where L_ij = 1 if p_i links to p_j, and L_ij = 0 otherwise. Kleinberg (Kleinberg, 1998) presented the HITS algorithm, which can identify hub and authority Web pages. A hub page has many links to authority pages, and an authority page is linked from many hub pages; the definitions of these pages are recursive and mutually reinforcing. In the algorithm, each Web page p_i has both a hub score y_i and an authority score x_i. Here, the operation L^T captures the idea that a good authority is indicated by many good hubs, and L captures the idea that a good hub points to many good authorities. Then

X = L^T Y,   Y = L X,

where X = (x_1, x_2, ..., x_n)^T and Y = (y_1, y_2, ..., y_n)^T are the vectors of the authority and hub scores, respectively, of each Web page. The final authority and hub scores of every Web page can be obtained through the iterative process

c X^(t+1) = L^T L X^(t),   c Y^(t+1) = L L^T Y^(t),

where c is a normalization constant such that ||X^(t)|| = ||Y^(t)|| = 1, and X^(t), Y^(t) respectively represent the authority and hub scores at the t-th iteration. Many improved algorithms exist for computing authority and hub scores. The ARC algorithm by Chakrabarti (Chakrabarti, 1998) extends Kleinberg's algorithm with textual analysis.
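As a concrete illustration, the HITS iteration described above can be sketched in a few lines of pure Python. This is a didactic sketch operating on a dense adjacency matrix, not an implementation suited to Web-scale graphs.

```python
def hits(L, iterations=50):
    """L[i][j] == 1 iff page i links to page j.
    Returns (authority, hub) score lists after power iteration."""
    n = len(L)
    x = [1.0] * n  # authority scores X
    y = [1.0] * n  # hub scores Y
    for _ in range(iterations):
        # X <- L^T Y : a good authority is pointed to by good hubs.
        x = [sum(L[i][j] * y[i] for i in range(n)) for j in range(n)]
        # Y <- L X  : a good hub points to good authorities.
        y = [sum(L[i][j] * x[j] for j in range(n)) for i in range(n)]
        # Normalize so the scores stay bounded (the constant c above).
        nx = sum(v * v for v in x) ** 0.5 or 1.0
        ny = sum(v * v for v in y) ** 0.5 or 1.0
        x = [v / nx for v in x]
        y = [v / ny for v in y]
    return x, y
```

On a small graph where two pages both link to a third, the third page receives the highest authority score and the two linkers the highest hub scores, matching the mutually reinforcing definitions.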
ARC computes a distance-2 neighborhood graph and weights its edges; the weight of each edge is based on the match between the query terms and the text surrounding the hyperlink in the source document. The algorithm is aimed at producing resource lists similar to those provided by the Internet directories Yahoo! or Infoseek. Its aim is similar to ours; however, ARC depends on textual analysis, and therefore cannot find Web
pages belonging to a certain category, such as official accommodation websites, because official accommodation websites do not always contain the most common words. Brin and Page (Brin, 1998; Page, 1997) presented the PageRank algorithm, which is used in the search engine Google. PageRank uses an idea similar to HITS, in that a good Web page should link to or be linked from other good Web pages. However, instead of mutual reinforcement, it adopts a Web surfing model based on a Markov process to determine the scores. Richardson (Richardson, 2002) presented a text-based extension of PageRank, and Ding (Ding, 2001) presented an analysis of HITS and PageRank and a unified framework for them. Web communities are another subject of link-based Web page analysis: Flake (Flake, 2000) defines a Web community as a set of Web pages that link, in either direction, to more Web pages inside the community than outside it. Members of such a community can be identified efficiently in a maximum flow/minimum cut framework.

6. CONCLUSION

We have applied our proposed robot program to local accommodation websites in a certain area of Hokkaido, although the program can also be applied to other objects by changing the extraction rule. In this area, there are about 140 accommodation facilities, such as hotels and pensions. It appears that about 55 of these facilities have an official website, although during the preliminary experiments we found that only nine facilities in this area are registered in the directory-type search engine So-net. By using our proposed program, we collected about 46 official websites of accommodation facilities in the area, and we found that the #1-ranked site was an official one with high probability (about 85%); even in some of the remaining cases, the #2-ranked site was an official one.
The proposed robot program can collect official accommodation websites automatically, and can be extended to other facilities, such as restaurants, without changing the framework of the algorithm, although some heuristics may need to be changed. Using the proposed technique, we can easily construct an automatically generated portal site for official accommodation websites.

REFERENCES

Kleinberg, J. (1998). Authoritative sources in a hyperlinked environment. Proceedings of the ACM-SIAM Symposium on Discrete Algorithms: 668-677.

Chakrabarti, S., Dom, B., Raghavan, P., Rajagopalan, S., Gibson, D. & Kleinberg, J. (1998). Automatic resource compilation by analyzing hyperlink structure and associated text. Proceedings of the 7th International World Wide Web Conference.

Bharat, K. & Henzinger, M. (1998). Improved algorithms for topic distillation in a hyperlinked environment. Proceedings of the ACM-SIGIR Conference.

Cohn, D. & Chang, H. (2000). Learning to probabilistically identify authoritative documents. Proceedings of ICML 2000: 167-174.

Ng, A. Y., Zheng, A. X. & Jordan, M. I. (2001). Stable algorithms for link analysis. Proceedings of the 24th International Conference on Research and Development in Information Retrieval (SIGIR 2001).

Chang, H., Cohn, D. & McCallum, A. (2000). Creating customized authority lists. Proceedings of the 17th International Conference on Machine Learning.

Gibson, D., Kleinberg, J. & Raghavan, P. (1998). Inferring Web communities from link topology. Proceedings of the 9th ACM Conference on Hypertext and Hypermedia (HYPER-98): 225-234.

Brin, S. & Page, L. (1998). The anatomy of a large-scale hypertextual Web search engine. Proceedings of the 7th World Wide Web Conference.
Page, L., Brin, S., Motwani, R. & Winograd, T. (1997). The PageRank citation ranking: bringing order to the Web. Stanford Digital Library working paper: 1997-0072.

Richardson, M. & Domingos, P. (2002). The intelligent surfer: probabilistic combination of link and content information in PageRank. Advances in Neural Information Processing Systems 14, MIT Press.

Haveliwala, T. (2002). Topic-sensitive PageRank. Proceedings of the 11th International World Wide Web Conference.

Ding, C., He, X., Husbands, P., Zha, H. & Simon, H. (2001). PageRank, HITS and a unified framework for link analysis. LBNL Tech Report 49372.

Ding, C., Zha, H., He, X., Husbands, P. & Simon, H. (2001). Link analysis: hubs and authorities on the World Wide Web. LBNL Tech Report 47847.

Flake, G., Lawrence, S. & Giles, C. L. (2000). Efficient identification of Web communities. Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

Lawrence, S. & Giles, C. L. (1999). Accessibility of information on the web. Nature 400(6740): 107-109.

Kosala, R. & Blockeel, H. (2000). Web mining research: a survey. ACM SIGKDD Explorations: 1-15.