Ontology Testing - Methodology and Tool Support


Eva Blomqvist 1,2, Azam Seil Sepour 3, and Valentina Presutti 2
1 Linköping University, Linköping, Sweden, eva.blomqvist@liu.se
2 Semantic Technologies Lab, ISTC-CNR, Italy
3 Jönköping University, Gjuterigatan 5, Jönköping, Sweden, baran.gems@gmail.com

Abstract. Ontology engineering lacks methods for verifying that ontological requirements are actually fulfilled by an ontology. There is a need for practical and detailed methodologies and tools for carrying out testing procedures and storing data about a test case. In this paper we first discuss the notion of ontology testing, and describe a methodology for conducting ontology testing, as well as three instances of this methodology for testing specific types of requirements. Next, we describe a tool that practically supports the methodology, in particular the three instantiations, and as part of that tool we present an approach for test generation that reduces the manual labour of the tester. We conclude that there is a need to support users in this crucial part of ontology engineering, and that our proposed methodology is a step in this direction.

Keywords: Ontology engineering, ontology evaluation, testing

1 Introduction

A number of ontology engineering methodologies are available, both for general formal ontologies and for application-specific models, e.g. for Semantic Web applications; we focus on the latter. Common ways of expressing ontological requirements include Competency Questions (CQs) [7] and descriptions of reasoning tasks; however, very few approaches focus on how to verify that such requirements are fulfilled. Developers need practical and detailed guidelines for carrying out testing and storing test cases. Section 1.1 motivates our problem, and the notion of ontology testing is discussed in Sect. 1.2, based on the current literature reviewed in Sect. 2. The notion of a test case is characterized in Sect. 3, before describing our methodology in Sect. 4.
Our novel tool for ontology testing is then presented in Sect. 5. The paper is concluded in Sect. 6.

1.1 Motivation and Problem

Although not always made explicit, some ontology engineering methodologies target specific aspects of ontologies, e.g. terminological coverage or general abstract notions, while others focus on application ontologies for performing a specific task within some software, e.g. on the Semantic Web. Competency Questions (CQs) [7] have been used as requirements for all these types of ontologies, but the nature of the questions differs. For application ontologies, requirements are detailed and precise, e.g. representing actual queries. Similarly, if inferences are needed, this is known from the software requirements. In such cases, it is important to be able to check whether the ontology fulfills those requirements, i.e. whether it can perform its intended task within the software. In addition, it is important to determine what input will be problematic to handle. Currently, there are, as far as we are aware, no detailed guidelines for how to perform ontology requirement verification, and the notion of a test case is poorly investigated.

1.2 What is Ontology Testing?

Ontology evaluation can be defined as the process of assessing an ontology with respect to certain criteria, using certain measures. The criteria can range from selection criteria for reuse, to detailed criteria concerning the internal logical structure. In this paper we are concerned with evaluating the functionality of an ontology, i.e. its functional perspective [4]. To evaluate the functionality of an ontology, we can use the ontological requirements as a specification of its intended task; this is comparable to requirements verification in software engineering. We can thereby describe ontology testing as a task-focused evaluation of an ontology against its ontological requirements, such as a set of competency questions.

2 Background and Related Work

Ontology engineering is generally inspired by software engineering. In software engineering, verification focuses on checking that the software meets its specified requirements [16]; our notion of ontology testing corresponds to verification. Software testing has two main purposes [16]: (i) demonstrating that the software fulfils its requirements, and (ii) detecting faults of the software, e.g.
cases when the behaviour is inaccurate or undesirable. Ontology testing also needs to cover both cases. Unit testing focuses on the functionality of a small piece of code, as opposed to integration and system testing. One of the most common families of software testing frameworks is xUnit [8]. Each unit test is stored as one test case class in the respective programming language, together with metadata. Such best practices, e.g. small sets of requirements each with separate test case(s), are transferable to ontology engineering. Early waterfall-style ontology engineering methodologies include [6, 17, 3], while recent ones focus on collaboration [14], adopt other process models [13], or adopt an agile development style, e.g. eXtreme Design (XD) [15]. Ontology Design Patterns (ODPs) [5] support various ontology engineering tasks; some ODPs are available as small ontologies representing modeling best practices (i.e. Content ODPs), and these are a focus of the XD methodology. The use of ODPs also introduces a natural modularization of the ontology, making the module a natural unit of the resulting ontology. In this paper, module simply refers to an

ontology, whether a small part of the overall ontology or the complete ontology, but restricted in terms of the functionality it provides (i.e. its requirements). Most methodologies mention evaluation, either for ontology selection or for assessing the resulting ontology, but the methods are rarely explained in detail. This paper focuses on ontology evaluation from a functional perspective [4], where metrics have classically focused on the terminology of the ontology. However, for application ontologies we need to evaluate the query and inference capabilities, in addition to the terminology. Unit testing for ontologies was proposed by [18], with ideas similar to ours, e.g. queries as unit tests for ontologies, but they did not provide a comprehensive methodology and guidelines for ontology engineers. The OWL Unit Test Framework [19, 9] is one of the few testing tools available. It contains various types of tests, ranging from sanity tests, i.e. heuristics such as checking that a property's characteristics correspond with the characteristics of its inverse, to detection of OWL DL modelling problems. The tool attaches unit tests to individual named classes, through annotation properties. Although it is named unit testing, the method focuses on syntactic and logical constructs, i.e. task-independent features. Similarly, there are a number of other ontology debugging tools, such as the XD Analyzer within XD Tools [1] (exploiting similar heuristics as [9]), the detection of missing or erroneous is-a structures in [10, 11], and the pattern-based debugging in [2]. These are, however, all focused on the structural level rather than the functional one, since they do not take application (or domain) specific requirements into account.

3 The notion of a test case

A test case is a container for storing information about a test, analogous with software testing frameworks [8].
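Independently of the OWL representation discussed below, the container idea itself can be sketched as two plain record types, one for the test definition and one for an individual run. This is only an illustrative sketch; the field names are hypothetical and not taken from any existing framework.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestCase:
    # A test definition: one requirement, one procedure for checking it.
    uri: str            # unique identifier of the test case
    requirement: str    # e.g. a competency question
    procedure: str      # e.g. a SPARQL query, or "classification"

@dataclass
class TestCaseRun:
    # One execution of a test case over one ontology version.
    test_case: TestCase
    input_data: list = field(default_factory=list)
    expected_result: Optional[object] = None
    actual_result: Optional[object] = None

    def passed(self) -> bool:
        # A run passes when the actual output matches the expectation.
        return self.actual_result == self.expected_result

case = TestCase(uri="http://example.org/family-test1",
                requirement="Who are the children of this person?",
                procedure="SPARQL SELECT query")
run = TestCaseRun(case, input_data=["Bob", "Mary", "Jill"],
                  expected_result={"Jill"}, actual_result={"Jill"})
print(run.passed())  # True
```

Keeping the definition and the run as separate records mirrors the separation between a stored test and its repeated executions.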
From software unit testing we note best practices such as keeping tests separate from the actual program code, and creating separate test cases for each unit of code and requirement to test. Hence, our proposal is to represent a test case as a separate OWL ontology, normally only an ABox relying on the TBox of the ontology to be tested and on our test case metamodel as described below. A test case is then naturally identified through its unique URI. This constitutes a considerable difference compared to [9], where annotations were stored inside the ontology to be tested, and associated with single elements of the ontology rather than with an ontology module. In order to realize this in practice we have created an OWL ontology containing the main concepts for describing tests, i.e. a test case metamodel, including properties for describing each test case. When creating a test case ontology, i.e. an instance of the TestCase class, the metamodel ontology is referenced and used to describe and annotate the test. A few of the main concepts of the metamodel ontology and their relations are illustrated in Figure 1. TestCase is the main concept, its instances are of type owl:Ontology, and each test case is related to a requirement, and to a particular test procedure, e.g. a SPARQL query in the case of CQ verification (see further below). A TestCase is a member of a TestSuite,

which is a collection of test cases for testing one ontology. However, an ontology is represented through a number of versions during its development; hence, the TestSuite relates to what we call an OntologyHistory, which is a collection of owl:Ontology instances consisting of all the versions of that ontology. In addition, when a TestCase is run over one specific version of the ontology, with one specific set of test data, at a certain time, that constitutes an instance of TestCaseRun, again an instance of type owl:Ontology. Each time the test is run, a new instance of TestCaseRun should be created and described. In total, the ontology provides the opportunity to store a number of pieces of information about test cases and test runs, as seen in Table 1 (the example is explained in the following section), where the first four properties describe a TestCase and the rest a TestCaseRun.

Fig. 1. An illustration of parts of the TestCase metamodel in OWL.

For unit testing, i.e. testing of single requirements, it is recommended to create one test case per requirement in order to separate concerns and ease the analysis of potential errors. By testing one functionality at a time it is easier to analyse the cause of any undesired behaviour. However, for integration testing and testing the overall ontology, a test case can contain larger sets of test procedures and potentially test data for the complete ontology, in order to also capture interactions between requirements, and subsequently between test data and different ontology capabilities. An option is to create an incrementally growing test dataset for the project, separately from the test cases, to make sure the overall ontology can deal with the complete set of data in the end.

4 Methodology for Ontology Testing

Based on related work and our own experience with both performing and teaching ontology engineering, a general methodology for performing ontology testing has emerged.
In addition, we have observed three main instances, or variants, of this methodology: CQ verification, inference verification, and error provocation. The first two are each connected to verifying the correct implementation of a requirement, e.g. a CQ or an inference task.

Table 1. Properties describing a test case (property, intended use, example)

hasRequirement - The requirement to be tested; sub-properties exist for storing more specific types of requirements. Example: the CQ "Who are the children of this person?"
testCategory - The type of test, e.g. inference test or CQ verification. Example: CQ verification
testCaseDescription - A textual description of the test, if needed. Example: a text like "Testing the CQ through a SPARQL query"
hasTest - The actual test procedure; sub-properties exist for specific types of tests, e.g. hasSPARQLQuery for storing a SPARQL query. Example: the SPARQL query SELECT * WHERE {?x a :Person. ?y a :Person. ?x :hasChild ?y}
hasInputTestData - Pointing at the test data used for this test. Example: the Person instances Bob, Mary, and Jill
hasExpectedResult - The expected output. Example: ?x = Bob and ?y = Jill
hasActualResult - The actual output. Example: -
executedBy - The person or organization who performed the test. Example: run by Tina Tester
hasExecutionDate - The date the test was run, i.e. when the results were produced.
hasEnvironment - Information about the test environment, e.g. the reasoner used. Example: the built-in SPARQL engine of the NeOn Toolkit
executionResult - Assessment of the test execution. Example: test ran successfully but no instances were retrieved
executionComment - Comments on a test execution, e.g. the reason for a failure, if known. Example: a missing assertion in the test data caused the negative result of the test run

Testing each requirement, e.g. each CQ, can be considered as a unit test for the particular part of the ontology realizing that requirement, as proposed initially in [18]. Nevertheless, the same methodology as described below can also be used for integration testing, i.e. testing requirements that cross module boundaries, and testing the capabilities of the overall ontology, although this is not exemplified in this paper. The general methodology is described in Table 2.
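The execution core of the methodology (steps 5 to 8: add test data, determine the expected result, run the test, compare) can be sketched in a few lines. The sketch below uses a plain list of subject-predicate-object triples as a stand-in for an OWL ABox, a toy pattern match in place of a real SPARQL engine, and hypothetical family data; none of this is the actual tool implementation.

```python
# Step 5: test data as (subject, predicate, object) assertions.
test_data = [
    ("Bob", "a", "Person"), ("Mary", "a", "Person"), ("Jill", "a", "Person"),
    ("Bob", "hasChild", "Jill"),
]

def run_query(triples, predicate):
    # Step 7: a toy "query" retrieving all (x, y) pairs linked by predicate.
    return {(s, o) for (s, p, o) in triples if p == predicate}

# Step 6: expected result, derived by hand from the test data.
expected = {("Bob", "Jill")}

# Step 8: compare actual against expected; any unexpected extras
# would need manual review, as discussed later in the paper.
actual = run_query(test_data, "hasChild")
print(actual == expected)   # True
print(actual - expected)    # set(), i.e. no unexpected answers
```

The same loop applies regardless of the test variant; only the "run" step (query, classification, or consistency check) changes.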
An important point is that making sure each task can be performed correctly, given a correct set of input data, is actually not enough. Since the aim of testing, cf. software testing, is not only to review the successful cases but also to find the cases where the ontology fails, additional tests are needed. In the following subsections, we explain how this general methodology can be instantiated to cover all test types.

4.1 Variant 1: CQ Verification

One way for the ontology to practically answer questions, such as CQs, is to reformulate them as SPARQL queries. Assume that we are creating an OWL ontology about families, e.g. parents and children. One module of the ontology realizes the following CQs: (1) Who are the children of this person? and (2) Who is married to whom? In step 2 of the methodology (cf. Table 2) we select CQ 1 for our first unit test. The CQ is then reformulated into a SPARQL query (step 3), using elements from the module to be tested. A possible query is (disregarding namespaces): SELECT * WHERE {?x a :Person. ?y a :Person. ?x :hasChild ?y} Next, a test case OWL file is created, as well as a first test run OWL file (step 4), i.e. the latter being an empty ontology where we

Table 2. General overview of the methodology.

1. Gather requirements - For a specific type of test, retrieve all the requirements of the current module that are relevant to this test type.
2. Select requirement - Following the principle of unit testing, select one requirement to test in each test case.
3. Formulate test procedure - Determine how to test that particular requirement.
4. Create test case - Create the test case OWL file, and an additional OWL file for storing the first test run (importing the version of the ontology to be tested), and describe both using the test case metamodel and its properties.
5. Add test data - Add the test data needed to perform the procedure according to step 3 in the test run OWL file.
6. Determine expected results - Depending on the test data, what would be the output of a correct test run?
7. Run test - Execute the procedure from step 3 on the test run OWL file with its data from step 5, and record the results.
8. Compare results - Verify the expected output (step 6) against the actual result from step 7.
9. Analyze unexpected results - If the result is not the expected one, analyze why, and document any change suggestions or issues.
10. Document - Store all information about the test run and its related test case by using the properties of the test metamodel.
11. Iterate - If there are more requirements of this module to test, return to step 2.

import the module realizing the two CQs, and we document both by referring to the testing metamodel. Then test data (step 5) related to the query we have formulated is added to the test run file, i.e. in this case individuals of the Person class such as Bob, Mary, and Jill, where some of the instances are related through the hasChild property, e.g. Bob - hasChild - Jill. Test data may originate in scenarios documented with the requirements, or be provided by a domain expert. In step 6 it needs to be determined which of the instances of Person should be retrieved by the query, i.e.
only those related through hasChild, in our case Bob and Jill. The SPARQL query is run on the test run ontology (step 7) and the results can be automatically verified against the expected output (step 8; confirming the expected output can be automated, however, any additional unexpected output has to be checked manually). The results of the test run are then stored. In case something is missing or undesired, the module is inspected to analyze the problem (step 9). Some errors are easily fixed by directly editing the module itself, but in other cases errors depend on related modules or on design choices that have to be resolved on a project-wide level. Whether the test result was positive or negative, the test run is documented in the test run ontology file using the properties provided by the metamodel (step 10), e.g. according to the example in Table 1. Then we proceed to treat CQ 2 (step 11), or continue to treat this requirement (in case of a negative test result) by creating a new test run OWL file related to the same test case, after fixing the problem discovered in the module. In many cases, problems are discovered as early as step 3, when formulating the query. These can also be considered as failed tests, and documented accordingly. For instance, a common mistake (according to previous experiments [1]) is to forget certain datatype properties. An additional CQ of our example could be What is the name of this person? Arriving at step 3, we may discover that there is actually no datatype property for storing the name of a

person, hence we cannot query for it. The test case can be documented in a new ontology file, just like before, and if desired we can even create a new test run OWL file in order to document our failed test run (steps 9-10), e.g. by describing the fault we discovered using the executionComment property. Although this procedure is quite rigorous, in large ontology projects it may be a necessity in order to keep track of issues and change requests based on negative test results, when numerous ontology engineers are involved in the development.

4.2 Variant 2: Verifying Inferences

Although CQs express what information the ontology should provide, they usually do not specify how this information needs to be produced, i.e. entered explicitly as assertions or derived from other facts through inferencing. Additional inference requirements can complement CQs, e.g. detailing complex CQs. Running a CQ verification test, as described above, might still return an expected result even if the desired inference mechanisms are not supported, simply because the tester asserted the expected inferences explicitly among the test data. By complementing such a test with another type of test, to verify that the inference mechanisms are in place, we are able to ensure the correct fulfillment of the inference requirement. For instance, using the same example as before, we may define the class Parent and say that any person who has at least one child is a parent. If we are not expecting the information about being a parent to be explicitly asserted, it may instead be derived from the presence of hasChild relations, assuming the ontology includes the appropriate axioms. Assume the same module as described above, with this additional inference requirement, i.e. Person instances should be classified as also being instances of Parent if they have some hasChild relation.
Assuming this as our selected requirement (step 2), in step 3 the test procedure is described as classification (commonly provided by OWL reasoners). Next, a new test case is created (step 4), as well as a first test run OWL file importing the ontology to be tested. Test data is added (step 5), i.e. both data that will produce the inference and data that should not; in our case, instances of Person that have hasChild relations and some that have none. In the example, we can use the same set of instances and facts about Bob, Mary, and Jill as above, saying that Bob - hasChild - Jill. The expected result (step 6) is the set of Person instances that have hasChild relations, i.e. only Bob should be classified as a parent. An OWL reasoner is run over the test run OWL file (step 7), and the results are verified against the expected ones (step 8; again, confirming the coverage of the expected inferences can be automated, while checking any additional inferences cannot) and stored. The result of the test run is assessed as positive if it produces the expected inferences, and no unwanted side effects. For any faults discovered, the module is analyzed (step 9), and undesired side effects have to be assessed to determine whether they can be handled by changing the ontology. Finally, the test run is documented (step 10), and any remaining requirements are treated (step 11).
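The logic of such an inference test can be sketched with a single hand-coded rule standing in for the reasoner's classification. This is a deliberately naive stand-in for illustration only, not how a description-logic reasoner works; the rule mirrors the example axiom that a Person with at least one hasChild is a Parent.

```python
def classify_parents(triples):
    # Toy "classification": infer Parent membership for every individual
    # that is asserted a Person and has at least one hasChild relation.
    persons = {s for (s, p, o) in triples if p == "a" and o == "Person"}
    with_child = {s for (s, p, o) in triples if p == "hasChild"}
    return persons & with_child

test_data = [
    ("Bob", "a", "Person"), ("Mary", "a", "Person"), ("Jill", "a", "Person"),
    ("Bob", "hasChild", "Jill"),  # only Bob should be inferred a Parent
]

expected = {"Bob"}
actual = classify_parents(test_data)
print(actual == expected)  # True: expected inference holds, no extras
```

Crucially, the test data contains both individuals that should trigger the inference (Bob) and individuals that should not (Mary, Jill), so that missing and spurious inferences are both detectable.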

4.3 Variant 3: Error Provocation

Apart from performing correct inferences and supporting queries, another important characteristic of an ontology is that it allows as few erroneous facts or inferences as possible. Ideally the ontology allows exactly the desired inferences and queries, while avoiding any irrelevant or erroneous side effects. This category of testing, error provocation or stress testing, is comparable to software testing where a system is fed random or incorrect data, or data considered as boundary values of the input, e.g. the extremes of value ranges, in order to check robustness. Detecting undesired effects in an ontology can be done in several ways; one way is consistency checking. Assume that another module of our family ontology models the notion of genders, dividing Person into the subclasses MalePerson and FemalePerson. Also assume that a requirement exists to enforce the common-sense constraint that a person cannot be both male and female (steps 1-2). In step 3 the test procedure is established as consistency checking using an OWL reasoner. The test case and test run OWL files are created (step 4), and instances of MalePerson and FemalePerson are added to the test data, where at least one is an instance of both (step 5). For instance, we could add Bob and Mary again, and while we let Bob be an instance of MalePerson, we assert that Mary is an instance of both MalePerson and FemalePerson. The expected result is an inconsistency (step 6), since the test data violates our requirement. The OWL reasoner is run (step 7), and the results are verified automatically (step 8). The test is assessed as positive if the test run reports an inconsistency. If no inconsistency is reported, we analyze the module (step 9); the cause might for instance be a missing disjointness axiom, i.e. in our example MalePerson and FemalePerson need to be disjoint.
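The pass criterion of this variant is inverted compared to the previous ones: the test succeeds exactly when an inconsistency is reported. A toy sketch, with a hard-coded disjointness list and a simple membership check standing in for an OWL reasoner, and hypothetical data:

```python
# Hypothetical disjointness axioms: no individual may belong to both classes.
DISJOINT = [("MalePerson", "FemalePerson")]

def is_consistent(memberships):
    # memberships: dict mapping individual -> set of asserted classes.
    for a, b in DISJOINT:
        for ind, classes in memberships.items():
            if a in classes and b in classes:
                return False  # this individual violates a disjointness axiom
    return True

test_data = {
    "Bob": {"MalePerson"},
    "Mary": {"MalePerson", "FemalePerson"},  # deliberately contradictory
}

# Error provocation: the deliberately bad data must be rejected,
# so the test is assessed as positive when consistency fails.
print(not is_consistent(test_data))  # True
```

If the check came back consistent despite the contradictory assertion about Mary, that would correspond to the missing disjointness axiom discussed above.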
In more complex cases, other side effects or shortcomings may be discovered, where some may even be an effect of the limitations set by the OWL language. Such cases can either be handled by the software using the ontology, or the overall modelling style has to be reconsidered. To conclude, we document the test run (step 10), and proceed (step 11).

5 Ontology Testing Tool Support

XD Tools is an Eclipse plugin (perspective), compatible with the NeOn Toolkit ontology engineering environment. It mainly supports the XD ontology engineering methodology, but contains several components that can be useful for ontology engineering in general [1], e.g. the XD Analyzer. The ontology testing tool is implemented as a part of XD Tools.

5.1 Gathering Requirements for Tool Support

To investigate how the methodology is actually used, and to determine what tool support is needed, we have used the methodology (during its development) for teaching advanced ontology engineering in several settings, e.g. PhD courses 5 as
5 Course on Computational Ontologies @ University of Bologna, 2011

well as tutorials in other organizations, such as the Swedish Defence Research Agency 6. In addition to observing the use of the methodology, in one of the PhD courses we performed a small study. The procedures described above can be performed using standard components of ontology engineering environments, e.g. SPARQL and inference engines, which was the case during this study. Half of the group were given the methodology described above (including the three variants, but excluding the distinction between test runs and test cases), and half were asked to test by inventing their own methodology. After the session they filled out a questionnaire recording their experiences and ideas. It turned out that both groups focused mostly on formulating SPARQL queries, while only the group given detailed guidelines also tested inferencing capabilities. In addition, most students applied some kind of code inspection, i.e. visually inspected the ontology in order to find errors and missing elements. This type of static testing (cf. software engineering [16]) is, however, not the focus of this paper. Since the students had been introduced to ODPs prior to the study, some also used the ODPs as gold standards for manually verifying modelling choices. When asked, their main problem with the first methodology variant was how to formulate the right SPARQL query corresponding to a requirement. To some extent this may be a problem of learning the SPARQL syntax; however, discussions also arose around query writing being tedious, and around how to save the SPARQL queries for later use. The participants found it quite unintuitive to have to create their own annotation properties (at the time our metamodel was not available), or to use general comments, for storing queries and other information. When done manually, the process was perceived as quite complex. Based on this, we decided to develop tool support for the methodology.
The user interface should in particular support test case and test run creation and documentation, e.g. through menus and tailored forms. To remedy the feeling of tedious work, we decided to explore some automated support for test case generation, e.g. targeting the most simple and obvious cases that could be perceived as routine work, when the requirements are expressed as CQs. During tool development, a user study with five users at different levels of expertise (from ontology engineers to master students) was also performed, on an initial version of the CQ verification functionality. The users were given a small ontology and a fixed task: to find, within a given time, as many mistakes as possible in that ontology. The users were asked to talk aloud, and at the end each user filled out a questionnaire. Several suggestions were related to storing more information about the test case, e.g. the time, the expected and actual results, etc. This was not yet implemented in the tool functionality at the time, but is supported by the metamodel (see Sect. 3) and is now supported by the new tool version.
6 Ontology Engineering at FOI

5.2 Ontology Testing Plug-in for XD Tools

The testing functionality appears in the user interface as items in the contextual menus of XD Tools 7, i.e. when right-clicking on an ontology in the ontology navigator view. These options include creating a new test case, which opens a dialogue that in the end results in creating a new test case OWL file, importing the test metamodel, and revealing the test overview tab. As a name for the test case, the tool proposes the same name (and namespace) as the ontology to be tested, with the suffix -testX, where X is a number identifying unit tests of this particular ontology. The second menu option is to open as test case, which assumes that the selected ontology imports the test metamodel, i.e. is a test case, from which metadata will be loaded. Finally, a test case ontology can be fetched through the import test case option. Once a test case is opened, the test case view appears at the bottom of the XD Tools perspective as a set of tabs. The first tab is the overview (see Figure 2), with general information such as the type of test and the requirement to be tested. Requirements can be entered through a text field.

Fig. 2. Creating a new test case using the context menu (a) and overview tab (b). (a) Menu alternatives for testing. (b) The overview tab, with one CQ.

Given that the user selects the CQ verification test variant, the tab following the overview gives the user the opportunity to assess the terminological overlap between the CQ and the classes and properties of the ontology, and another tab shows its use of ODPs (i.e. through imports). The final tab then provides an interface for writing and executing SPARQL queries corresponding to the CQ. A SPARQL query can be entered and saved (using the appropriate hasSPARQLQuery property of the test metamodel).
The user can enter expected results, and then the SPARQL engine can be run, either after first producing all inferences or only on the asserted model, and the results are displayed to the user.
7 Currently the software requires the use of NeOn Toolkit version To install the test functionality, first install XD Tools and then place the two jar files in your plugin folder (replacing existing ones with the same name); the files can be found at:

The actual results are verified against the expected ones automatically, except when it comes to additional unexpected results. If one of the other test variants is selected, only one additional tab is shown, where the user can enter expected results, run an inference engine, and then (partly automatically) verify the actual results against the expected ones, e.g. see Figure 3.

Fig. 3. The tab for reasoning tests, where expected results inserted through the topmost form are shown in green (middle) if confirmed in the actual results (bottom). Yellow indicates additional inferences.

5.3 SPARQL Suggestions

Although experiments have shown that testing CQs through SPARQL queries assists users in finding simple mistakes (cf. [1]), such as missing elements, manually writing SPARQL queries can be perceived as tedious routine work (see Section 5.1). In methodologies such as XD, it is suggested that tests are written already at design time, and directly formalized as SPARQL queries; in practice, however, this is not often the case. In our experience, the most common way to document ontological requirements is as informal natural language sentences. In order to make the generation of SPARQL queries based on CQs feasible in practice, we rely on two assumptions: (i) we assume that the user is building an ontology module, rather than a large monolithic ontology, and (ii) the terminology of the requirements, i.e. the CQs, coincides roughly with the terminology of the ontology model. Since the method relies on analyzing RDF graphs, efficiency (mainly in terms of time) is reduced if it is applied to large ontologies, while under assumption (i) it is guaranteed to stay within reasonable limits (from a user perspective). The second assumption is reasonable since requirements are developed in collaboration with domain experts and usually reflect the intended terminology. The overall aim is to produce good suggestions that are useful to the user, i.e. we prefer correctness over completeness.
Our proposed algorithm relies on the following main steps 8:
8 The implementation is available as source code for download at

1. Analysing the CQ linguistically in order to form a set of linguistic query triples, and identifying the type of question (for details see [12]).
2. Matching the linguistic triples to the triples of the ontology, to find candidate classes and properties to be involved in the query.
3. Among those properties, finding structural paths that connect the most relevant classes, with the path structure depending on the type of query.
4. Selecting the best matching paths, based on ranking their length and their linguistic matches to the CQ terminology.
5. Generating a query, based on the selected classes and property path(s).

Step 1 is realised through methods for question analysis from the AquaLog and PowerAqua systems [12]. As an example, when running this method over the OWL building block of the AgentRole ODP 9, containing the CQ Which agent does play this role?, the linguistic analysis results in the following linguistic triple: (agent; play; this role), where the first term is the main focus of the question, and the second is the relation between the first and third terms. The following three steps aim at finding classes that correspond to the terms in the query triples from step 1, and then finding paths of properties that connect those classes, where paths that linguistically resemble the triples from step 1 are preferred (e.g. the property label is similar). In our example, the classes Agent and Role will be selected. Next, we start looking for paths that connect those classes. A path is defined as a set of connected class triples, where a class triple in this case constitutes two classes (or a class and a datatype range) and a property relating those classes (a datatype or object property). Connected means that the triples form a chain through overlapping classes. Paths are constructed based on the logical axioms related to properties, i.e. domain and range, restrictions on classes, as well as subclass reasoning for inheriting such axioms.
For our example, there are four object properties available in AgentRole (including all its imports): hasRole and isRoleOf, as well as classifies and isClassifiedBy. All of them can connect instances of Agent and Role, according to their domain and range axioms; hence, four paths are found. The linguistic triples (including the terms) and the question type are used to select among candidates, and short paths are also preferred over long ones. Depending on the type of question, the method aims for a certain path structure, e.g. a descriptive question may result in a query for instances of one class, while a question with several conditions may result in a complex path structure. In our AgentRole example the paths are all of length one, resulting in the hasRole property being selected due to its string similarity with a term in the linguistic triple. Using the two classes and the hasRole property, the following SPARQL query is generated (for readability reasons we omit all namespace prefixes):

SELECT * WHERE { ?x a :Agent . ?y a :Role . ?x :hasRole ?y }

Evaluation. The SPARQL suggestions were evaluated over a number of ontologies to confirm that (i) erroneous suggestions are avoided, and (ii) the suggestions

given are likely to be useful. The evaluation was performed over ontologies and repositories where CQs are readily available. The only judgement involved is to determine the correctness of the generated SPARQL query over the given ontology, with respect to the CQ. One set of ontologies was the Content ODP OWL building blocks present in the ODP portal (only 36 of the 95 Content ODPs were annotated with CQs). In addition, a repository of ontology modules created within the IKS project, in a use case concerning Ambient Intelligence systems, was used. Finally, we included two slightly larger ontologies from different domains, i.e. the FSDAS ontology network of FAO, which models the fish stock domain, and an ontology of small molecules within the biological domain.

The suggestions given were classified into five categories:

- Correct complete (C): the SPARQL is correct and covers the complete CQ.
- Correct incomplete (CI): the SPARQL is correct, but does not cover the complete CQ.
- Partly correct (P): at least one triple pattern of the SPARQL is correct, but it additionally contains triple patterns that are redundant or unrelated to the CQ.
- Incorrect (I): the SPARQL is unrelated to the CQ.
- Missing (M): there is no SPARQL suggestion produced.

The results of this experiment can be seen in Table 3, where the values of the categories are given as the percentage of the total number of CQs of that set. The category values have also been summarized into useful correct suggestions, UC (the sum of C and CI), and useful suggestions, U (the sum of C, CI, and P, i.e. the suggestions that contain at least one triple pattern that is relevant for the query).

Table 3. Evaluation results [numeric cell values not preserved in this transcription; columns: Source, #Ont, #CQs, C, CI, P, I, M, UC, U; rows: ODP Repository, AmI Repository, FSDAS, SMO]

Note that there are very few erroneous results (2.1% for the CP repository, and 0 for all the others); hence, our objective of not providing users with erroneous suggestions is definitely met.
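Computing the summary measures from raw per-category counts is straightforward; the sketch below uses made-up counts purely for illustration (the actual figures are those reported in Table 3):

```python
def summarize(counts):
    """Convert per-category counts for one CQ set into percentages of the
    total number of CQs, and derive UC (= C + CI) and U (= C + CI + P)."""
    total = sum(counts.values())
    pct = {k: 100.0 * v / total for k, v in counts.items()}
    pct["UC"] = pct["C"] + pct["CI"]
    pct["U"] = pct["UC"] + pct["P"]
    return pct

# Hypothetical counts for a set of 10 CQs (not the paper's figures)
pct = summarize({"C": 4, "CI": 2, "P": 1, "I": 0, "M": 3})
print("UC=%.1f%% U=%.1f%%" % (pct["UC"], pct["U"]))
```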
We also note that, out of all suggestions, between two thirds and 100% of them are correct in all their parts (even if not complete in some cases). It seems much more difficult to match CQs in the ODP repository than in the other datasets, which is not highly surprising, since most of the ODPs are very general and at that level there is not a specific enough terminology. In the most specific cases, i.e. the AmI repository and FSDAS, we are able to produce useful suggestions in over 60% of the cases.

When looking at the results in more detail, we see that the missing suggestions are not evenly distributed: usually an ontology either has some suggestions for all its CQs, or it has none at all. This can in many cases be attributed to the modelling style of that particular ontology, e.g. if no domain, range or other axioms are given for any of the properties, our method will not find any paths. This highlights that providing SPARQL suggestions is not only a matter of convenience for the user; the method also gives us important information about the ontology. For instance, if no matching classes and properties are found, the terminology of the model may be too far from the original requirements. On the other hand, if no connecting paths are found, this could indicate either that modelling best practices are not being followed (e.g. concerning axiomatization) or that some property is missing. These observations point at the possibility of using our method as a debugging tool, rather than only for test generation; however, this is still future work.

6 Conclusions

This paper has described the notion of ontology testing for application-oriented ontologies, analogous to software testing. Ontology testing can be viewed as the task-focused functional evaluation of an ontology, for verifying its requirements. We show that ontology test cases and test runs can be represented as ontologies themselves, described through a metamodel, in order to store both test data and metadata about the test procedure, execution, and test run results. We have presented a general test methodology and three variants (CQ verification, inference verification, and error provocation) that have emerged from our experience in ontology engineering, as well as a tool to support ontology testing.
In addition, we have shown that automatic test generation, through suggesting SPARQL queries directly from requirement CQs, is feasible and can provide interesting insights even when no suggestion is found. The main contributions of this paper include the practical notion of a test case and the metamodel to describe it, the detailed description of the methodology, as well as the method for automatically generating test suggestions in the form of SPARQL queries.

Future work includes thoroughly evaluating the testing tool with a larger user base. Focusing on the details of the tool support, the next step is to add support for distinguishing between the test case and the test run. Additionally, we aim to include new ways of entering the requirement(s) to be tested (importing from a file or from annotations) and of entering the SPARQL query (retrieving it from annotations, e.g. from an imported CP). With respect to the SPARQL suggestions, we will also investigate how to provide useful debugging information to the user even when no complete suggestion is available. An option to view the implementation details of the imported CPs will also be added. So far the focus has been on support for unit testing. While the test case notion as such and the methods described can be applied to all kinds of tests, this still needs to be further investigated with respect to tool support and additional guidelines.

Acknowledgements

Thanks to Vanessa Lopez, KMi (Open University), for providing the linguistic component for the SPARQL suggestions. This project was supported by the European Commission, through project IKS (FP7 ICT /No ), and The Swedish Foundation for International Cooperation in Research and Higher Education, project DEON (#IG ).

References

1. Blomqvist, E., Presutti, V., Daga, E., Gangemi, A.: Experimenting with eXtreme Design. In: Proc. of EKAW 2010, Lisbon, Portugal, October 11-15. LNCS, vol. 6317. Springer (2010)
2. Djedidi, R., Aufaure, M.A.: Onto-Evoal: an Ontology Evolution Approach Guided by Pattern Modelling and Quality Evaluation. In: Proc. of the 6th International Symposium on Foundations of Information and Knowledge Systems (FoIKS 2010). LNCS. Springer (2010)
3. Fernández, M., Gómez-Pérez, A., Juristo, N.: METHONTOLOGY: from Ontological Art towards Ontological Engineering. In: Proceedings of the AAAI97 Spring Symposium Series on Ontological Engineering (1997)
4. Gangemi, A., Catenacci, C., Ciaramita, M., Lehmann, J.: Modelling Ontology Evaluation and Validation. In: Proc. of the 3rd European Semantic Web Conference, Budva, Montenegro, June 11-14. LNCS. Springer (2006)
5. Gangemi, A., Presutti, V.: Ontology Design Patterns. In: Handbook on Ontologies, 2nd edn. International Handbooks on Information Systems. Springer (2009)
6. Grüninger, M., Fox, M.: Methodology for the Design and Evaluation of Ontologies. In: Proceedings of IJCAI 95, Workshop on Basic Ontological Issues in Knowledge Sharing, April 13, 1995 (1995)
7. Gruninger, M., Fox, M.S.: The role of competency questions in enterprise engineering. In: Proc. of the IFIP WG5.7 Workshop on Benchmarking - Theory and Practice (1994)
8. Hamill, P.: Unit Test Frameworks - Tools for High-Quality Software Development. O'Reilly Media (2004)
9. Horridge, M.: The OWL Unit Test Framework, loads/owlunittest/
10. Liu, Q., Lambrix, P.: Debugging the missing is-a structure of networked ontologies. In: Proc. of ESWC 2010, Heraklion, Crete, Greece, May 30 - June 3, 2010, Part II. LNCS, vol. 6089. Springer (2010)
11. Liu, Q., Lambrix, P.: A System for Debugging is-a Structure in Networked Taxonomies. In: Poster and demo proc. - ISWC 2011, Bonn, Germany (2011)
12. Lopez, V., Uren, V.S., Motta, E., Pasin, M.: AquaLog: An ontology-driven question answering system for organizational semantic intranets. Journal of Web Semantics 5(2) (2007)
13. Nicola, A.D., Missikoff, M., Navigli, R.: A software engineering approach to ontology building. Journal of Information Systems 34(2) (2009)
14. Pinto, H.S., Staab, S., Tempich, C.: DILIGENT: Towards a fine-grained methodology for DIstributed, Loosely-controlled and evolving Engineering of ontologies. In: Proceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004), Valencia, Spain (2004)
15. Presutti, V., Daga, E., Gangemi, A., Blomqvist, E.: eXtreme Design with Content Ontology Design Patterns. In: Proc. of WOP 2009, collocated with ISWC 2009. CEUR Workshop Proceedings (2009)
16. Sommerville, I.: Software Engineering. Addison Wesley, 8th edn. (2007)
17. Uschold, M.: Building Ontologies: Towards a Unified Methodology. In: Proceedings of Expert Systems 96, the 16th Annual Conference of the British Computer Society Specialist Group on Expert Systems, Cambridge, UK (December 1996)
18. Vrandecic, D., Gangemi, A.: Unit tests for ontologies. In: OTM 2006 Workshops. LNCS, vol. 4278. Springer (2006)
19. Wang, H., Horridge, M., Rector, A.L., Drummond, N., Seidenberg, J.: Debugging OWL-DL Ontologies: A Heuristic Approach. In: Proc. of ISWC 2005, Galway, Ireland, November 6-10. LNCS, vol. 3729. Springer (2005)


More information

EFFICIENT INTEGRATION OF SEMANTIC TECHNOLOGIES FOR PROFESSIONAL IMAGE ANNOTATION AND SEARCH

EFFICIENT INTEGRATION OF SEMANTIC TECHNOLOGIES FOR PROFESSIONAL IMAGE ANNOTATION AND SEARCH EFFICIENT INTEGRATION OF SEMANTIC TECHNOLOGIES FOR PROFESSIONAL IMAGE ANNOTATION AND SEARCH Andreas Walter FZI Forschungszentrum Informatik, Haid-und-Neu-Straße 10-14, 76131 Karlsruhe, Germany, awalter@fzi.de

More information

Evaluation of Commercial Web Engineering Processes

Evaluation of Commercial Web Engineering Processes Evaluation of Commercial Web Engineering Processes Andrew McDonald and Ray Welland Department of Computing Science, University of Glasgow, Glasgow, Scotland. G12 8QQ. {andrew, ray}@dcs.gla.ac.uk, http://www.dcs.gla.ac.uk/

More information

2 Experimental Methodology and Results

2 Experimental Methodology and Results Developing Consensus Ontologies for the Semantic Web Larry M. Stephens, Aurovinda K. Gangam, and Michael N. Huhns Department of Computer Science and Engineering University of South Carolina, Columbia,

More information

Digital Archives: Extending the 5S model through NESTOR

Digital Archives: Extending the 5S model through NESTOR Digital Archives: Extending the 5S model through NESTOR Nicola Ferro and Gianmaria Silvello Department of Information Engineering, University of Padua, Italy {ferro, silvello}@dei.unipd.it Abstract. Archives

More information

INFORMATICS RESEARCH PROPOSAL REALTING LCC TO SEMANTIC WEB STANDARDS. Nor Amizam Jusoh (S ) Supervisor: Dave Robertson

INFORMATICS RESEARCH PROPOSAL REALTING LCC TO SEMANTIC WEB STANDARDS. Nor Amizam Jusoh (S ) Supervisor: Dave Robertson INFORMATICS RESEARCH PROPOSAL REALTING LCC TO SEMANTIC WEB STANDARDS Nor Amizam Jusoh (S0456223) Supervisor: Dave Robertson Abstract: OWL-S as one of the web services standards has become widely used by

More information

H1 Spring C. A service-oriented architecture is frequently deployed in practice without a service registry

H1 Spring C. A service-oriented architecture is frequently deployed in practice without a service registry 1. (12 points) Identify all of the following statements that are true about the basics of services. A. Screen scraping may not be effective for large desktops but works perfectly on mobile phones, because

More information

Requirements Validation and Negotiation

Requirements Validation and Negotiation REQUIREMENTS ENGINEERING LECTURE 2015/2016 Eddy Groen Requirements Validation and Negotiation AGENDA Fundamentals of Requirements Validation Fundamentals of Requirements Negotiation Quality Aspects of

More information

IJCSC Volume 5 Number 1 March-Sep 2014 pp ISSN

IJCSC Volume 5 Number 1 March-Sep 2014 pp ISSN Movie Related Information Retrieval Using Ontology Based Semantic Search Tarjni Vyas, Hetali Tank, Kinjal Shah Nirma University, Ahmedabad tarjni.vyas@nirmauni.ac.in, tank92@gmail.com, shahkinjal92@gmail.com

More information

Description Logics as Ontology Languages for Semantic Webs

Description Logics as Ontology Languages for Semantic Webs Description Logics as Ontology Languages for Semantic Webs Franz Baader, Ian Horrocks, and Ulrike Sattler Presented by:- Somya Gupta(10305011) Akshat Malu (10305012) Swapnil Ghuge (10305907) Presentation

More information

LinDA: A Service Infrastructure for Linked Data Analysis and Provision of Data Statistics

LinDA: A Service Infrastructure for Linked Data Analysis and Provision of Data Statistics LinDA: A Service Infrastructure for Linked Data Analysis and Provision of Data Statistics Nicolas Beck, Stefan Scheglmann, and Thomas Gottron WeST Institute for Web Science and Technologies University

More information

Evaluating OWL 2 Reasoners in the Context Of Checking Entity-Relationship Diagrams During Software Development

Evaluating OWL 2 Reasoners in the Context Of Checking Entity-Relationship Diagrams During Software Development Evaluating OWL 2 Reasoners in the Context Of Checking Entity-Relationship Diagrams During Software Development Alexander A. Kropotin Department of Economic Informatics, Leuphana University of Lüneburg,

More information

Towards a Vocabulary for Data Quality Management in Semantic Web Architectures

Towards a Vocabulary for Data Quality Management in Semantic Web Architectures Towards a Vocabulary for Data Quality Management in Semantic Web Architectures Christian Fürber Universitaet der Bundeswehr Muenchen Werner-Heisenberg-Weg 39 85577 Neubiberg +49 89 6004 4218 christian@fuerber.com

More information

Semantic Web. Ontology Engineering and Evaluation. Morteza Amini. Sharif University of Technology Fall 95-96

Semantic Web. Ontology Engineering and Evaluation. Morteza Amini. Sharif University of Technology Fall 95-96 ه عا ی Semantic Web Ontology Engineering and Evaluation Morteza Amini Sharif University of Technology Fall 95-96 Outline Ontology Engineering Class and Class Hierarchy Ontology Evaluation 2 Outline Ontology

More information

TrOWL: Tractable OWL 2 Reasoning Infrastructure

TrOWL: Tractable OWL 2 Reasoning Infrastructure TrOWL: Tractable OWL 2 Reasoning Infrastructure Edward Thomas, Jeff Z. Pan, and Yuan Ren Department of Computing Science, University of Aberdeen, Aberdeen AB24 3UE, UK Abstract. The Semantic Web movement

More information

BPAL: A Platform for Managing Business Process Knowledge Bases via Logic Programming

BPAL: A Platform for Managing Business Process Knowledge Bases via Logic Programming BPAL: A Platform for Managing Business Process Knowledge Bases via Logic Programming Fabrizio Smith, Dario De Sanctis, Maurizio Proietti National Research Council, IASI Antonio Ruberti - Viale Manzoni

More information

Verification of Multiple Agent Knowledge-based Systems

Verification of Multiple Agent Knowledge-based Systems Verification of Multiple Agent Knowledge-based Systems From: AAAI Technical Report WS-97-01. Compilation copyright 1997, AAAI (www.aaai.org). All rights reserved. Daniel E. O Leary University of Southern

More information

Networked Ontologies

Networked Ontologies Networked Ontologies Information Systems & Semantic Web Universität Koblenz-Landau Koblenz, Germany With acknowledgements to S. Schenk, M. Aquin, E. Motta and the NeOn project team http://www.neon-project.org/

More information

Development of an Ontology-Based Portal for Digital Archive Services

Development of an Ontology-Based Portal for Digital Archive Services Development of an Ontology-Based Portal for Digital Archive Services Ching-Long Yeh Department of Computer Science and Engineering Tatung University 40 Chungshan N. Rd. 3rd Sec. Taipei, 104, Taiwan chingyeh@cse.ttu.edu.tw

More information

Testing! Prof. Leon Osterweil! CS 520/620! Spring 2013!

Testing! Prof. Leon Osterweil! CS 520/620! Spring 2013! Testing Prof. Leon Osterweil CS 520/620 Spring 2013 Relations and Analysis A software product consists of A collection of (types of) artifacts Related to each other by myriad Relations The relations are

More information

An ontology for the Business Process Modelling Notation

An ontology for the Business Process Modelling Notation An ontology for the Business Process Modelling Notation Marco Rospocher Fondazione Bruno Kessler, Data and Knowledge Management Unit Trento, Italy rospocher@fbk.eu :: http://dkm.fbk.eu/rospocher joint

More information

Just in time and relevant knowledge thanks to recommender systems and Semantic Web.

Just in time and relevant knowledge thanks to recommender systems and Semantic Web. Just in time and relevant knowledge thanks to recommender systems and Semantic Web. Plessers, Ben (1); Van Hyfte, Dirk (2); Schreurs, Jeanne (1) Organization(s): 1 Hasselt University, Belgium; 2 i.know,

More information

Introduction to the Semantic Web Tutorial

Introduction to the Semantic Web Tutorial Introduction to the Semantic Web Tutorial Ontological Engineering Asunción Gómez-Pérez (asun@fi.upm.es) Mari Carmen Suárez -Figueroa (mcsuarez@fi.upm.es) Boris Villazón (bvilla@delicias.dia.fi.upm.es)

More information

Benefits and Challenges of Architecture Frameworks

Benefits and Challenges of Architecture Frameworks Benefits and Challenges of Architecture Frameworks Daniel Ota Michael Gerz {daniel.ota michael.gerz}@fkie.fraunhofer.de Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE

More information

Introduction to Software Engineering

Introduction to Software Engineering Introduction to Software Engineering Gérald Monard Ecole GDR CORREL - April 16, 2013 www.monard.info Bibliography Software Engineering, 9th ed. (I. Sommerville, 2010, Pearson) Conduite de projets informatiques,

More information

Available online at ScienceDirect. Procedia Computer Science 52 (2015 )

Available online at  ScienceDirect. Procedia Computer Science 52 (2015 ) Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 52 (2015 ) 1071 1076 The 5 th International Symposium on Frontiers in Ambient and Mobile Systems (FAMS-2015) Health, Food

More information

Graph Representation of Declarative Languages as a Variant of Future Formal Specification Language

Graph Representation of Declarative Languages as a Variant of Future Formal Specification Language Economy Informatics, vol. 9, no. 1/2009 13 Graph Representation of Declarative Languages as a Variant of Future Formal Specification Language Ian ORLOVSKI Technical University of Moldova, Chisinau, Moldova

More information

1 Version management tools as a basis for integrating Product Derivation and Software Product Families

1 Version management tools as a basis for integrating Product Derivation and Software Product Families 1 Version management tools as a basis for integrating Product Derivation and Software Product Families Jilles van Gurp, Christian Prehofer Nokia Research Center, Software and Application Technology Lab

More information

MERGING BUSINESS VOCABULARIES AND RULES

MERGING BUSINESS VOCABULARIES AND RULES MERGING BUSINESS VOCABULARIES AND RULES Edvinas Sinkevicius Departament of Information Systems Centre of Information System Design Technologies, Kaunas University of Lina Nemuraite Departament of Information

More information

TEL2813/IS2820 Security Management

TEL2813/IS2820 Security Management TEL2813/IS2820 Security Management Security Management Models And Practices Lecture 6 Jan 27, 2005 Introduction To create or maintain a secure environment 1. Design working security plan 2. Implement management

More information

Improving Adaptive Hypermedia by Adding Semantics

Improving Adaptive Hypermedia by Adding Semantics Improving Adaptive Hypermedia by Adding Semantics Anton ANDREJKO Slovak University of Technology Faculty of Informatics and Information Technologies Ilkovičova 3, 842 16 Bratislava, Slovak republic andrejko@fiit.stuba.sk

More information

Trust4All: a Trustworthy Middleware Platform for Component Software

Trust4All: a Trustworthy Middleware Platform for Component Software Proceedings of the 7th WSEAS International Conference on Applied Informatics and Communications, Athens, Greece, August 24-26, 2007 124 Trust4All: a Trustworthy Middleware Platform for Component Software

More information

Designing a System Engineering Environment in a structured way

Designing a System Engineering Environment in a structured way Designing a System Engineering Environment in a structured way Anna Todino Ivo Viglietti Bruno Tranchero Leonardo-Finmeccanica Aircraft Division Torino, Italy Copyright held by the authors. Rubén de Juan

More information

Developing Web-Based Applications Using Model Driven Architecture and Domain Specific Languages

Developing Web-Based Applications Using Model Driven Architecture and Domain Specific Languages Proceedings of the 8 th International Conference on Applied Informatics Eger, Hungary, January 27 30, 2010. Vol. 2. pp. 287 293. Developing Web-Based Applications Using Model Driven Architecture and Domain

More information