FOEval: Full Ontology Evaluation


FOEval: Full Ontology Evaluation Model and Perspectives

Abderrazak BACHIR BOUIADJRA
Computer Science Department, Djilali Liabes University, Sidi Bel Abbes, Algeria
abbouiadjra@gmail.com

Sidi-Mohamed BENSLIMANE
Computer Science Department, Djilali Liabes University, Sidi Bel Abbes, Algeria
benslimane@univ-sba.dz

Abstract: In this research, we propose a new evaluation model for choosing an ontology that fits user requirements. The proposed model presents two main features that distinguish it from previous research models. First, it enables users to select, from a set of proposed metrics, those that help them in the ontology evaluation process, and to assign each metric a weight based on its assumed impact on this process. Second, it enables users to evaluate locally stored ontologies and/or to request available ontologies from search engines. The main goal of this model is to ease the ontology evaluation task for users wishing to reuse available ontologies, enabling them to choose the ontology most adequate to their requirements.

Keywords: ontology, ontology evaluation, ontology ranking

I. INTRODUCTION

Ontologies have been shown to be beneficial for representing domain knowledge and are quickly becoming the backbone of the Semantic Web; this has led to the development of many ontologies in different domains. Developed ontologies need to be evaluated to ensure their correctness and quality during the construction process. Likewise, users facing a large number of available ontologies need a way to assess them and decide which one best fits their requirements. The need for ontology evaluation approaches and tools is crucial as ontology development and reuse become increasingly important.

The rest of the paper is organized as follows. Section 2 surveys ontology evaluation approaches according to a set of evaluation criteria. Section 3 reviews different tools developed for ontology evaluation.
In Section 4, we present our ontology evaluation model, FOEval (Full Ontology Evaluation). Finally, Section 5 concludes the paper and outlines essential future research.

II. STATE OF THE ART

Since there is no single unifying definition of what constitutes ontology evaluation [4], this section reviews the literature on ontology evaluation approaches by answering four questions that help us classify them.

A. What should be evaluated?

A variety of ontology evaluation approaches have been established, depending on the perspective of what should be evaluated. Most of them focus on evaluating the whole ontology; others focus on partial evaluation of the ontology, in order to reuse it in an ontology engineering task [16].

B. Why should it be evaluated?

We divide ontology evaluation goals into validity evaluation and quality evaluation. We define validity evaluation as the process that evaluates an ontology to guarantee that it is free from any formal or semantic error [2], [10]. We define quality evaluation as the process that evaluates the quality and adequacy of an ontology against particular predefined criteria, for use in a specific context and for a specific purpose. A variety of quality evaluation metrics have been proposed in the literature, among them comprehensiveness, richness, completeness, interpretability, adaptability and reusability [12].

C. When should it be evaluated?

Ontology evaluation is an important issue that must be addressed at different stages of the ontology lifecycle, which we divide into four principal steps:
- Before the ontology building process: to evaluate the resources used to build the ontology [5].
- During the ontology building process: to guarantee that the ontology is free from errors [2], [20].
- During the ontology evolution process: to assess the effect of changes and to verify whether the ontology quality has increased or decreased, especially in automatic or semi-automatic ontology engineering approaches [13], [14], [17].
- Before reusing the ontology: to choose, among a set of available ontologies, the one most appropriate to user needs [2], [12].

D. On what basis should it be evaluated?

We divide the bases for ontology evaluation into:
- Corpus-based evaluation: empirically estimates the accuracy and the coverage of the ontology.
- Gold-standard-based evaluation: compares candidate ontologies to a gold-standard ontology that serves as a reference.
- Task-based evaluation: looks at how the results of an ontology-based application are affected by the use of an ontology.
- Expert-based evaluation: presents ontologies to human experts, who judge how far the developed ontology is correct.
- Criteria-based evaluation: measures how far an ontology adheres to desirable criteria.
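As a toy illustration of gold-standard-based evaluation (our own example, not taken from the paper), a candidate ontology's class names can be compared against those of a reference ontology with a simple set-overlap measure such as the Jaccard index:

```python
# Hypothetical sketch of gold-standard-based evaluation: Jaccard overlap
# between the class-name sets of a candidate and a reference ontology.
# The class lists below are invented example data.

def jaccard(candidate_classes, gold_classes):
    """|intersection| / |union| of the two class-name sets."""
    cand, gold = set(candidate_classes), set(gold_classes)
    if not cand and not gold:
        return 1.0  # two empty ontologies are trivially identical
    return len(cand & gold) / len(cand | gold)

gold = ["Student", "Professor", "Course", "University"]
candidate = ["Student", "Course", "Department"]
score = jaccard(candidate, gold)
print(round(score, 2))  # 2 shared names out of 5 distinct names -> 0.4
```

Real gold-standard comparisons also account for relations and lexical variation, but the principle is the same: score the candidate by its agreement with the reference.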

III. EVALUATION TOOLS

Several ontology evaluation tools have been developed in recent years. They differ according to the issues described above. We present the most important of them below.

Swoogle [3] is an ontology search engine that offers a limited search facility that can be interpreted as topic coverage. Given a search keyword, Swoogle can retrieve ontologies that contain a class or a relation lexically matching the keyword.

OntoKhoj [1] is an ontology search engine that extends the traditional keyword-based approach by considering word senses when ranking ontologies that cover a topic. It accommodates a manual sense disambiguation process; then, according to the sense chosen by the user, hypernyms and synonyms are selected from WordNet.

Watson [11] is an ontology search engine with an efficient mechanism for finding the best ontologies while taking equivalent ontologies into account. The authors consider two ontologies describing the same vocabulary to be semantically equivalent if they express the same meaning, even if they are written differently from a syntactic point of view. Obtaining non-redundant results is a good way to increase efficiency and improve robustness.

OntoQA [6] is a tool that measures the quality of an ontology from the consumer perspective, using schema and instance metrics. It takes as input a crawled populated ontology or a set of user-supplied search terms, and ranks ontologies according to metrics covering various aspects of an ontology.

OntoCAT [7] provides a comprehensive set of metrics for use by the ontology consumer or knowledge engineer to assist in ontology evaluation for reuse. Its evaluation process focuses on ontology summaries based on size, structural, hub and root properties.

AKTiveRank [9] is a tool that ranks ontologies using a set of metrics based on ontology structure.
It takes keywords as input and queries Swoogle for them in order to extract candidate ontologies; it then applies measures based on the coverage and the structure of the ontologies to rank them. Its shortcoming is that its measures operate only at the class level.

OS_Rank [15] is an ontology evaluation system that evaluates and ranks ontologies based on class name, on the detail degree of the searched class, on the number of semantic relations of the searched class, and on the domain of interest, using WordNet to resolve semantic ambiguities.

IV. FULL ONTOLOGY EVALUATION MODEL

In this section, we describe a new evaluation model. Its main goal is to ease the ontology evaluation task for users wishing to reuse available ontologies, enabling them to choose the ontology most adequate to their requirements. The proposed model is a ranking and selection tool with three main features that distinguish it from other models. First, it enables users to select, from a set of proposed metrics, those that help them in the ontology evaluation process, and to assign each metric a weight based on its assumed impact on this process. Second, it enables users to evaluate locally stored and/or searched ontologies (from different search engines). Third, it has an advanced mechanism for capturing structural and semantic information about the classes and relations of the user's desired domain.

A. FOEval Architecture

Figure 1 shows the current architecture of FOEval.

Figure 1. FOEval Architecture

The goal of the first step, Prepare, is to decide which ontologies will be evaluated and ranked: locally stored ontologies and/or searched ontologies. The goal of the second step, Metrics, is to decide which of the proposed metrics will be used in the evaluation process, and to assign each used metric a weight based on its assumed impact on this process. In the Evaluate step, candidate ontologies are evaluated against each used metric and given a numerical score.
An overall score for the ontology is then computed as a weighted sum of its per-metric scores.

B. FOEval Prepare

The goal of this step is to decide which ontologies will be evaluated and ranked: introduced ontologies and/or searched ontologies. FOEval offers an advanced ontology search mechanism with the following features:
- FOEval can evaluate only introduced ontologies;
- FOEval can evaluate only searched ontologies;
- FOEval can evaluate both introduced and searched ontologies;
- FOEval can request available ontologies from different search engines (Swoogle, Watson).

- FOEval requests can be formulated with keywords provided by the user only; with user-selected synonyms and hypernyms based on WordNet; and with important classes extracted from the introduced ontologies. We consider a class important if it has the largest number of hierarchical and semantic relations; a class can also be considered important if a large number of other important classes are linked to it.

C. FOEval Metrics

In this research, we propose to evaluate and rank candidate ontologies using a rich set of metrics: coverage, richness, detail level, comprehensiveness, connectedness and computational efficiency.

Coverage: term coverage consists of class coverage and relation coverage. Class coverage measures how many of the searched keywords match a class name in the ontology, while relation coverage measures how many of the searched keywords match a relation name in the ontology [19]:

COV(T,O) = w1 · CCov(T,O) + w2 · RCov(T,O)

Detail level: this measure combines a global and a specific component:

DL(T,O) = w1 · Gdl(O) + w2 · Sdl(T,O)

The global detail level (Gdl) is a good indication of how well knowledge is grouped into categories and subcategories in the ontology; it can distinguish a horizontal (flat) ontology from a vertical one. Formally, we define Gdl as the average number of subclasses per class.

The specific detail level (Sdl) is a good indication of the importance of the searched terms in the ontology. We consider an ontology that contains a searched term such as "student" as a class with many sub- and upper classes preferable to an ontology that contains the class "student" without any subclass. Formally, we define Sdl as the sum of four parameters: first, the average number of subclasses and upper classes of the searched class; second, the number of relations of the searched class; third, the number of relations of the subclasses of the searched class; and fourth, the number of relations of the upper classes of the searched class.

In these formulas, sub(c,O) is the number of subclasses of class c; upp(t,O) is the number of upper classes of class t; J(ci,cj,t) = 1 if relation t exists between ci and cj, and 0 otherwise; c, ci and cj are classes; O is an ontology; t is a searched term; and T is the number of searched terms.

Richness: ontology richness can be measured at different levels.
- Relation richness reflects the diversity of relations and their placement in the ontology. An ontology that contains many relations other than hierarchical relations is richer than a taxonomy with only hierarchical relationships.
- Attribute richness is the average number of attributes defined per class; it indicates the amount of information pertaining to instance data, since the more attributes are defined, the more knowledge the ontology conveys.

Formally, we define ontology richness (OR) as the sum of relationship richness (rr) and attribute richness (ar). Relationship richness is the number of non-hierarchical relationships SP defined in the ontology, divided by the number of all relationships P. Attribute richness is the number of attributes of all classes Att, divided by the number of classes C.

Comprehensiveness: this metric assesses the content comprehensiveness of an ontology. Formally, we define ontology comprehensiveness (OC) as a weighted combination of the average number of annotated classes (Ac), the average number of annotated relations (Ar), and the average number of instances per class (Ic).
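The coverage and richness definitions can be sketched in code. The following is our own minimal illustration, under an assumed toy ontology representation (plain lists of class, relation and attribute counts), not the authors' implementation:

```python
# Illustrative sketch (not FOEval's code): coverage and richness metrics
# on a toy in-memory ontology. The list-based representation is an assumption.

def class_coverage(terms, classes):
    """CCov: fraction of searched terms matching a class name."""
    names = {c.lower() for c in classes}
    return sum(t.lower() in names for t in terms) / len(terms)

def relation_coverage(terms, relations):
    """RCov: fraction of searched terms matching a relation name."""
    names = {r.lower() for r in relations}
    return sum(t.lower() in names for t in terms) / len(terms)

def coverage(terms, classes, relations, w1=1.0, w2=1.0):
    """COV(T,O) = w1 * CCov(T,O) + w2 * RCov(T,O)."""
    return w1 * class_coverage(terms, classes) + w2 * relation_coverage(terms, relations)

def richness(n_non_hierarchical, n_relations, n_attributes, n_classes):
    """OR = rr + ar: relationship richness plus attribute richness."""
    rr = n_non_hierarchical / n_relations   # SP / P
    ar = n_attributes / n_classes           # Att / C
    return rr + ar

# Toy ontology: 4 classes, 5 relations (2 non-hierarchical), 8 attributes.
classes = ["Student", "Person", "Course", "University"]
relations = ["enrolledIn", "subClassOf", "teaches", "subClassOf", "locatedIn"]
cov = coverage(["student", "teaches"], classes, relations)
print(cov)  # 0.5 class coverage + 0.5 relation coverage = 1.0
```

With the default weights w1 = w2 = 1, a term matching both a class and a relation contributes to both components, as in the example above.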

In the comprehensiveness formula, w1, w2 and w3 are sub-metric weights; Ann(c,O) = 1 if class c is annotated; I(c,O) is the number of instances of class c; Ann(ci,cj,O) = 1 if a relation between ci and cj exists and is annotated; C is the number of all classes in the ontology; and R is the number of all relations.

Computational efficiency: this principle favours an ontology that can be successfully and easily processed, in particular regarding the speed reasoners need to fulfil the required tasks, be it query answering, classification, or consistency checking [4]. The size of the ontology, the numbers of classes and relations, and other parameters affect its efficiency. Formally, we define ontology computational efficiency (OCE) as the sum of the average number of classes (Anc), the average number of subclasses per class (Ansc), the average number of relations (Anr), the average number of relations per class (Anrc), and the average ontology size (Aos). In this formula, w1, w2, w3 and w4 are sub-metric weights; C(O) is the number of classes of the evaluated ontology; mc(O) is the largest number of classes among the candidate ontologies; sc(c,O) is the number of subclasses of all classes of the ontology; R(O) is the number of relations of the evaluated ontology; mr(O) is the largest number of relations among the candidate ontologies; Size(O) is the size of the evaluated ontology in kilobytes; and msize(O) is the maximum candidate ontology size in kilobytes.

Finally, FOEval completes the evaluation by computing an overall score for each candidate ontology as the sum of its metric and sub-metric scores, calculated on normalized values to avoid any undesirable influence of one metric or sub-metric on another. Formally, we define the FOEval evaluation function as:

FOEval(Ok) = k1 · NCov(Ok) + k2 · NOR(Ok) + k3 · NDL(Ok) + k4 · NOC(Ok) + k5 · NOCE(Ok)

where k1, k2, k3, k4 and k5 are global per-metric weights, Ok ranges over the candidate ontologies, and NMetric denotes the normalized value of the corresponding metric (min = 0, max = 1).

D. FOEval Evaluation

The first feature of FOEval evaluation is its selective evaluation of candidate ontologies: users can include any or all metrics in the evaluation process depending on their needs, and for each included metric FOEval calculates a numerical score. The second feature is its global and specific metric weights: each metric is globally weighted to give it more or less importance than the others, based on the evaluation goals and on user needs. In addition, each sub-metric has a specific weight; this helps users, for example, to evaluate only the relation richness of candidate ontologies rather than the global richness, which also includes attribute richness. The default weight value is one (1); optionally the user can change it to another value in [0, 10], where zero means the metric or sub-metric is disabled and ten means it is very important in the evaluation process.

E. FOEval Results

The last step is to present the evaluation results. We consider this an important task that needs much work and enhancement, because users will take decisions based on this output. We therefore propose both a textual and a graphical representation of the ranked ontologies, including helpful information such as the numbers of classes and relations, size, date, path, and global and per-metric results.

Figure 2. FOEval Textual Result
Figure 3. FOEval Graphical Result
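The normalization and weighted aggregation described above can be sketched as follows. This is a hedged illustration: min-max normalization across the candidate set is our assumption about how the [0, 1] scaling is obtained, and the metric values are invented example data.

```python
# Illustrative sketch (assumption, not FOEval's implementation): min-max
# normalise each per-metric score across the candidates to [0, 1], then
# combine them with global weights k1..k5 as in FOEval(Ok).

def normalise(scores):
    """Min-max normalise a list of raw metric scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                      # all candidates equal on this metric
        return [1.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def foeval_rank(candidates, weights):
    """candidates: {name: {metric: raw score}}; weights: {metric: k}."""
    names = list(candidates)
    metrics = list(weights)
    # Normalise each metric "column" across all candidates.
    norm = {m: normalise([candidates[n][m] for n in names]) for m in metrics}
    totals = {
        n: sum(weights[m] * norm[m][i] for m in metrics)
        for i, n in enumerate(names)
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

cands = {
    "onto_a": {"cov": 0.8, "or": 2.0, "dl": 3.0, "oc": 0.5, "oce": 10.0},
    "onto_b": {"cov": 0.4, "or": 3.0, "dl": 1.0, "oc": 0.9, "oce": 20.0},
}
ranking = foeval_rank(cands, {"cov": 1, "or": 1, "dl": 1, "oc": 1, "oce": 1})
print(ranking[0][0])  # onto_b scores 3.0 vs 2.0 and ranks first
```

Normalising per metric before summing ensures that a metric with a large raw range (such as ontology size) cannot dominate the weighted total, which is exactly the "undesirable influence" the model guards against.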

The graphical representation displays ontology summaries based on [18] and [21]. We add two main ideas: (1) a global summary, which allows the user to show more or less detail about the ontology as a whole; and (2) a partial summary, which allows the user to show more or less detail about a specific part or a specific class. These two ideas support both full and partial ontology evaluation, and can be very helpful for FOEval users because, before they take evaluation decisions, they are offered an advanced view of, and important information about, what they need.

V. CONCLUSION AND PERSPECTIVES

In this paper, we have presented a novel classification of ontology evaluation approaches according to four questions; this classification summarizes the main efforts performed in this area. We have also presented the principal features of FOEval, which is tunable, requires minimal user involvement, and would be useful in many ontology evaluation scenarios:
- evaluating locally stored and/or searched ontologies from different search engines;
- evaluating ontologies to ensure their correctness and assess their quality during the construction process;
- evaluating ontologies, or versions of an ontology, to assess the effect of changes during the evolution process;
- evaluating ontologies to choose the one most appropriate to user needs before reuse.

FOEval offers several benefits. First, it strengthens the theoretical base for ontology evaluation by proposing a new model and rich metrics. Second, it can evaluate only available ontologies, only searched ontologies, or a mix of both. Third, it requests search engines using searched terms, important class names of available ontologies, and hypernyms and synonyms selected from WordNet according to the sense chosen by the user. Fourth, it avoids redundant results and equivalent ontologies, relying on the Watson search engine mechanisms [11].
In addition, FOEval enables ontology users to evaluate ontologies easily, to decide which metrics will be used in the process, and to assign weights to each used metric and sub-metric depending on their needs. We plan to make FOEval a web-based tool, where users can evaluate the quality of ontologies given their file paths. We also plan to offer the possibility of introducing a corpus or a gold-standard ontology to serve as a reference in the evaluation process. Finally, we plan to add further metrics to enhance our model and tool and to meet user requirements. In our opinion, future work in this area should focus particularly on quality evaluation, as the number of available ontologies continues to grow.

VI. REFERENCES

[1] Patel, C., Supekar, K., Lee, Y., and Park, E. K.: OntoKhoj: A Semantic Web Portal for Ontology Searching, Ranking and Classification. In Proceedings of the Workshop on Web Information and Data Management. ACM, 2003.
[2] Gómez-Pérez, A.: Ontology Evaluation. In Staab, S. and Studer, R., editors, Handbook on Ontologies, First Edition, chapter 13, pages 251-274. Springer, 2004.
[3] Ding, L., Finin, T., Joshi, A., Pan, R., Scott Cost, R., Peng, Y., Reddivari, P., Doshi, V. C., and Sachs, J.: Swoogle: A Search and Metadata Engine for the Semantic Web. In Proceedings of the 13th CIKM, 2004.
[4] Gangemi, A., Catenacci, C., Ciaramita, M., and Lehmann, J.: Ontology Evaluation and Validation: An Integrated Formal Model for the Quality Diagnostic Task, 2005.
[5] Tarhuni, M., Meyer, R., and Bagayoko, C. O.: Master's thesis, Paris V University, 2005.
[6] Tartir, S., Arpinar, I. B., Moore, M., Sheth, A. P., and Aleman-Meza, B.: OntoQA: Metric-Based Ontology Quality Analysis. IEEE Workshop on Knowledge Acquisition from Distributed, Autonomous, Semantically Heterogeneous Data and Knowledge Sources, Houston, TX, USA, 2005.
[7] Cross, V. and Pal, A.: OntoCAT: An Ontology Consumer Analysis Tool and Its Use on Product Services Categorization Standards. In Proceedings of the First International Workshop on Applications and Business Aspects of the Semantic Web, 2006.
[8] Sabou, M., Lopez, V., Motta, E., and Uren, V.: Ontology Selection: Evaluation on the Real Semantic Web. Fourth International Evaluation of Ontologies for the Web Workshop (EON2006), UK, 2006.
[9] Jones, M. and Alani, H.: Content-Based Ontology Ranking. In Proceedings of the 9th Int. Protégé Conference, CA, 2006.
[10] Fahad, M., Qadir, M. A., and Noshairwan, W.: Semantic Inconsistency Errors in Ontologies. Proc. of GrC'07, Silicon Valley, USA. IEEE CS, 2007.
[11] d'Aquin, M., Baldassarre, C., Gridinoc, L., Sabou, M., Angeletou, S., and Motta, E.: Watson: Supporting Next Generation Semantic Web Applications. In WWW/Internet Conference, Spain, 2007.
[12] Obrst, L., Ceusters, W., Mani, I., Ray, S., and Smith, B.: The Evaluation of Ontologies. In Baker, C. J. O. and Cheung, K.-H., editors, Revolutionizing Knowledge Discovery in the Life Sciences, chapter 7, pages 139-158. Springer, 2007.
[13] Dellschaft, K. and Staab, S.: Strategies for the Evaluation of Ontology Learning. In Buitelaar, P. and Cimiano, P., editors, Ontology Learning and Population: Bridging the Gap Between Text and Knowledge, pages 253-272. IOS Press, 2008.
[14] Djedidi, R. and Aufaure, M.-A.: Patrons de gestion de changements OWL (OWL change management patterns). PhD thesis, Computer Science Department, Supélec, Gif-sur-Yvette, France, 2009.
[15] Wei, Y. and Chen, J.: Ranking Ontology Based on Structure Analysis. Second International Symposium on Knowledge Acquisition and Modeling. IEEE, 2009.
[16] d'Aquin, M. and Lewen, H.: Cupboard - A Place to Expose Your Ontologies to Applications and the Community. In The Semantic Web: Research and Applications, 6th European Semantic Web Conference, ESWC 2009.
[17] Murdock, J., Buckner, C., and Allen, C.: Two Methods for Evaluating Dynamic Ontologies. Indiana University, Bloomington, 2010.
[18] Li, N., Motta, E., and d'Aquin, M.: Ontology Summarization: An Analysis and an Evaluation. International Workshop on Evaluation of Semantic Technologies (IWEST 2010), Shanghai, China, 2010.
[19] Oh, S. and Yeom, H. Y.: User-Centered Evaluation Model for Ontology Selection. IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, 2010.
[20] Ohta, M., Kozaki, K., and Mizoguchi, R.: A Quality Assurance Framework for Ontology Construction and Refinement. Proc. of the 7th Atlantic Web Intelligence Conference (AWIC 2011), Switzerland, 2011.
[21] Cheng, G., Ge, W., and Qu, Y.: Generating Summaries for Ontology Search. In Companion Proceedings of the World Wide Web Conference, India, 2011.