Final results of the Ontology Alignment Evaluation Initiative 2011


Final results of the Ontology Alignment Evaluation Initiative 2011

Jérôme Euzenat 1, Alfio Ferrara 2, Willem Robert van Hage 3, Laura Hollink 4, Christian Meilicke 5, Andriy Nikolov 6, François Scharffe 7, Pavel Shvaiko 8, Heiner Stuckenschmidt 5, Ondřej Šváb-Zamazal 9, and Cássia Trojahn 1

1 INRIA & LIG, Montbonnot, France {jerome.euzenat,cassia.trojahn}@inria.fr
2 Università degli Studi di Milano, Italy ferrara@dico.unimi.it
3 Vrije Universiteit Amsterdam, The Netherlands W.R.van.Hage@vu.nl
4 Delft University of Technology, The Netherlands l.hollink@tudelft.nl
5 University of Mannheim, Mannheim, Germany {christian,heiner}@informatik.uni-mannheim.de
6 The Open University, Milton Keynes, UK A.Nikolov@open.ac.uk
7 LIRMM, Montpellier, France francois.scharffe@lirmm.fr
8 TasLab, Informatica Trentina, Trento, Italy pavel.shvaiko@infotn.it
9 University of Economics, Prague, Czech Republic ondrej.zamazal@vse.cz

Abstract. Ontology matching consists of finding correspondences between semantically related entities of two ontologies. OAEI campaigns aim at comparing ontology matching systems on precisely defined test cases. These test cases can use ontologies of different nature (from simple directories to expressive OWL ontologies) and different evaluation modalities, e.g., blind evaluation, open evaluation, consensus. OAEI 2011 builds on previous campaigns with 4 tracks and 6 test cases, followed by 18 participants. Since 2010, the campaign has been using a new evaluation modality which provides more automation to the evaluation. In particular, this year it made it possible to compare run times across systems. This paper is an overall presentation of the OAEI 2011 campaign.

1 Introduction

The Ontology Alignment Evaluation Initiative (OAEI) is a coordinated international initiative which organizes the evaluation of the increasing number of ontology matching systems [10; 8; 15]. The main goal of OAEI is to compare systems and algorithms on the same basis and to allow anyone to draw conclusions about the best matching strategies.

This paper improves on the "First results" initially published in the on-site proceedings of the ISWC workshop on Ontology Matching (OM-2011). The only official results of the campaign, however, are on the OAEI web site.

Our ambition is that, from such evaluations, tool developers can improve their systems.

The first two events were organized in 2004: (i) the Information Interpretation and Integration Conference (I3CON) held at the NIST Performance Metrics for Intelligent Systems (PerMIS) workshop and (ii) the Ontology Alignment Contest held at the Evaluation of Ontology-based Tools (EON) workshop of the annual International Semantic Web Conference (ISWC) [17]. Then, a unique OAEI campaign occurred in 2005 at the workshop on Integrating Ontologies held in conjunction with the International Conference on Knowledge Capture (K-Cap) [1]. From 2006 through 2010, the OAEI campaigns were held at the Ontology Matching workshops collocated with ISWC [9; 7; 2; 5; 6]. Finally, in 2011, the OAEI results were presented again at the Ontology Matching workshop collocated with ISWC, in Bonn, Germany.

Since last year, we have been promoting an environment for automatically processing evaluations (§2.2), which was developed within the SEALS (Semantic Evaluation At Large Scale) project. This project aims at providing a software infrastructure for automatically executing evaluations, and evaluation campaigns for typical semantic web tools, including ontology matching. Several OAEI data sets were evaluated under the SEALS modality. This provides a more uniform evaluation setting.

This paper serves as a synthesis of the 2011 evaluation campaign and as an introduction to the results provided in the papers of the participants. The remainder of the paper is organized as follows. In Section 2, we present the overall evaluation methodology that has been used. Sections 3-6 discuss the settings and the results of each of the test cases. Section 7 overviews lessons learned from the campaign. Finally, Section 8 outlines future plans and Section 9 concludes the paper.

2 General methodology

We first present the test cases proposed this year to the OAEI participants (§2.1). Then, we discuss the resources used by participants to test their systems and the execution environment used for running the tools (§2.2). Next, we describe the steps of the OAEI campaign (§2.3-2.5) and report on the general execution of the campaign (§2.6).

2.1 Tracks and test cases

This year's campaign consisted of 4 tracks gathering 6 data sets and different evaluation modalities:

The benchmark track (§3): Like in previous campaigns, a systematic benchmark series has been proposed. The goal of this benchmark series is to identify the areas in which each matching algorithm is strong or weak. The test is based on one particular ontology dedicated to the very narrow domain of bibliography and a number of alternative ontologies of the same domain for which reference alignments are provided.

This year, we used new systematically generated benchmarks, based on ontologies other than the bibliographic one.

The expressive ontologies track offers real world ontologies using OWL modeling capabilities:

Anatomy (§4): The anatomy real world case is about matching the Adult Mouse Anatomy (2744 classes) and a part of the NCI Thesaurus (3304 classes) describing the human anatomy.

Conference (§5): The goal of the conference task is to find all correct correspondences within a collection of ontologies describing the domain of organizing conferences (a domain well understandable to every researcher). Additionally, interesting correspondences are also welcome. Results were evaluated automatically against reference alignments and by using logical reasoning techniques.

Oriented alignments: This track focused on the evaluation of alignments that contain relations other than equivalences. It provides two data sets of real ontologies taken from a) Academia (alterations of ontologies from the OAEI benchmark series) and b) Course catalogs (alterations of ontologies describing courses at the universities of Cornell and Washington). The alterations aim to introduce additional subsumption correspondences between classes that cannot be inferred via reasoning.

Model matching: This data set compares model matching tools from the Model-Driven Engineering (MDE) community on ontologies. The test cases are available in two formats: OWL and Ecore. The models to be matched have been automatically derived from a model-based repository.

Instance matching (§6): The goal of the instance matching track is to evaluate the performance of different tools on the task of matching RDF individuals which originate from different sources but describe the same real-world entity. Instance matching is organized in two sub-tasks:

Data interlinking (DI): This year the data interlinking track focused on retrieving the links between New York Times (NYT) data and DBPedia, Freebase and Geonames. The NYT data set includes 4 data subsets (persons, locations, organizations and descriptors) that should be matched to themselves to detect duplicates, and to DBPedia, Freebase and Geonames. Only Geonames has links to the locations data set of NYT.

OWL data track (IIMB): The synthetic OWL data track focuses on (i) providing an evaluation data set for various kinds of data transformations, including value transformations, structural transformations and logical transformations, and (ii) covering a wide spectrum of possible techniques and tools. To this end, the IIMB benchmark is generated by starting from an initial OWL knowledge base that is transformed into a set of modified knowledge bases by applying several automatic transformations of data. Participants are requested to find the correct correspondences among individuals of the first knowledge base and individuals of the others.

Table 1 summarizes the variation in the results expected from the tests under consideration.

test        formalism  relations  confidence  modalities   language  SEALS
benchmarks  OWL        =          [0 1]       open         EN        yes
anatomy     OWL        =          [0 1]       open         EN        yes
conference  OWL-DL     =, <=     [0 1]       blind+open   EN        yes
di          RDF        =          [0 1]       open         EN
iimb        RDF        =          [0 1]       open         EN

Table 1. Characteristics of the test cases (open evaluation is made with already published reference alignments and blind evaluation is made by organizers from reference alignments unknown to the participants). [The SEALS column marks, per Section 2.2, the tracks run under the SEALS modality.]

This year we had to cancel the Oriented alignments and Model matching tracks, which did not attract enough participants. We preserved the IIMB track with only one participant, especially because the participant was not tied to the organizers and participated in the other tracks as well.

2.2 The SEALS platform

In 2010, participants of the Benchmark, Anatomy and Conference tracks were asked for the first time to use the SEALS evaluation services: they had to wrap their tools as web services and the tools were executed on the machines of the tool developers [18]. In 2011, tool developers had to implement a simple interface and to wrap their tools in a predefined way, including all required libraries and resources. A tutorial for tool wrapping was provided to the participants. It described how to wrap a tool and how to use a simple client to run a full evaluation locally. After local tests had been conducted successfully, the wrapped tool was uploaded to the SEALS portal for a test. It was then executed on the SEALS platform by the organisers in a semi-automated way. This approach allowed us to measure runtime and ensured the reproducibility of the results for the first time in the history of OAEI. As a side effect, this approach also ensures that a tool is executed with the same settings for all three tracks. This had already been requested by the organizers in past years; however, this rule was sometimes ignored by participants.

2.3 Preparatory phase

Ontologies to be matched and (where applicable) reference alignments were provided in advance during the period between May 30th and June 27th, 2011. This gave potential participants the occasion to send observations, bug corrections, remarks and other test cases to the organizers. The goal of this preparatory period is to ensure that the delivered tests make sense to the participants. The final test base was released on July 6th, 2011. The data sets did not evolve after that, except for the reference alignment of the Anatomy track, to which minor changes were applied to increase its quality.
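The actual SEALS interface is Java-based and is not reproduced here. As a purely hypothetical sketch of the contract described in §2.2, a wrapped tool essentially exposes one entry point that takes the two ontologies and returns an alignment, and the tutorial's simple client replays this entry point over a set of test cases. All names below are illustrative, not the platform's API:

    from abc import ABC, abstractmethod

    class MatcherBridge(ABC):
        """Hypothetical stand-in for the wrapping interface: one entry
        point, fixed settings, all libraries and resources bundled."""

        @abstractmethod
        def align(self, source_url: str, target_url: str) -> str:
            """Match two ontologies and return the alignment as RDF/XML."""

    class MyMatcher(MatcherBridge):
        def align(self, source_url, target_url):
            # The tool's own matching pipeline would go here.
            return "<rdf:RDF>...</rdf:RDF>"

    def run_local_evaluation(matcher, test_cases):
        """Mimics the simple client used for local tests before upload."""
        return [matcher.align(src, tgt) for src, tgt in test_cases]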

2.4 Execution phase

During the execution phase, participants used their systems to automatically match the ontologies from the test cases. In most cases, ontologies are described in OWL-DL and serialized in the RDF/XML format [3]. Participants were asked to use one algorithm and the same set of parameters for all tests in all tracks. It is fair to select the set of parameters that provides the best results (for the tests where results are known). Besides parameters, the input of the algorithms must be the two ontologies to be matched and any general purpose resource available to everyone, i.e., no resource especially designed for the test. In particular, participants should not use the data (ontologies and reference alignments) from other test cases to help their algorithms. Participants can self-evaluate their results either by comparing their output with reference alignments or by using the SEALS client to compute precision and recall.

2.5 Evaluation phase

Participants were encouraged to provide (preliminary) results or to upload their wrapped tools on the SEALS portal by September 1st, 2011. Organizers evaluated the results and gave feedback to the participants. For the SEALS modality, a full-fledged test on the platform was conducted by the organizers and problems were reported to the tool developers, until finally a properly executable version of the tool was uploaded on the SEALS portal. Participants were asked to send their final results or upload the final version of their tools by September 23rd, 2011. Participants also provided the papers that are published hereafter. As soon as the first results were available, they were published on the respective web pages by the track organizers.

The standard evaluation measures are precision and recall computed against the reference alignments. To aggregate these measures, we used weighted harmonic means (the weights being the size of the true positives). This clearly helps in the case of empty alignments. Another technique that was used is the computation of precision/recall graphs, so participants were advised to attach a weight to each correspondence they found. New measures addressing some limitations of precision and recall have also been used for testing purposes, as well as measures compensating for the lack of complete reference alignments. Additionally, we measured runtimes for all tracks conducted under the SEALS modality.

2.6 Comments on the execution

For a few years, the number of participating systems has remained roughly stable: 4 participants in 2004, 7 in 2005, 10 in 2006, 17 in 2007, 13 in 2008, 16 in 2009, 15 in 2010 and 18 in 2011. However, the participating systems are now constantly changing. The number of covered runs has increased with respect to last year: 48 in 2007, 50 in 2008, 53 in 2009, 37 in 2010, and 53 in 2011. This is, of course, due to the ability to run all systems participating in the SEALS modality in all tracks. However, not all tools participating in the SEALS modality could generate results for the anatomy track (see Section 4). This does not really contradict the conjecture we made last year that systems are more specialized.
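As a minimal sketch of the aggregation described above (not the SEALS client itself), note that the weighted harmonic mean of per-test precisions, with true-positive counts as weights, reduces to a micro-average over all test cases, which stays well defined even when some returned alignments are empty:

    def aggregate(results):
        """results: list of (tp, found, expected) triples, one per test,
        where tp = |A and R|, found = |A| and expected = |R|."""
        tp = sum(r[0] for r in results)
        found = sum(r[1] for r in results)
        expected = sum(r[2] for r in results)
        # Weighted harmonic mean of precisions with weights tp_i:
        #   sum(tp_i) / sum(tp_i / p_i) = sum(tp_i) / sum(found_i)
        precision = tp / found if found else 0.0
        recall = tp / expected if expected else 0.0
        return precision, recall

    print(aggregate([(10, 12, 20), (0, 0, 5)]))  # an empty alignment is harmless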

In fact, only two systems (AgreementMaker and CODI) also participated in the instance matching tasks, and CODI only participated in a task (IIMB) in which no other instance matching system entered.

This year we were able to run most of the matchers in a controlled evaluation environment, in order to test their portability and deployability. This allowed us to compare systems on the same execution basis. This is also a guarantee that the tested systems can be executed outside their particular development environments. The list of participants is summarized in Table 2.

Systems: AgrMaker, Aroma, CSA, CIDER, CODI, LDOA, Lily, LogMap, MaasMtch, MapEVO, MapPSO, MapSSS, OACAS, OMR, Optima, Serimi, YAM++, Zhishi (Total = 18).
Participation per data set: benchmarks: 16; anatomy: 16; conference: 14; di: 3; iimb: 1.

Table 2. Participants and the state of their submissions. Confidence stands for the type of result returned by a system: it is ticked when the confidence has been measured as a non-boolean value. [The per-system check marks of the original table are not legible in this transcription.]

Two systems require a special remark. YAM++ used a setting that was learned from the reference alignments of the benchmark data set from OAEI 2009, which is highly similar to the corresponding benchmark in 2011. This affects the results of the traditional OAEI benchmark and no other tests. Moreover, we have run the benchmark on newly generated tests, where YAM++ indeed shows weaker performance. Considering that benchmarks was indeed one of the few tests on which algorithms could be trained, we decided to keep the YAM++ results with this warning. AgreementMaker used machine learning techniques to choose automatically between one of three settings optimized for the benchmark, anatomy and conference data sets. It used a subset of the available reference alignments as input to the training phase, and thus clearly a setting specifically tailored for passing these tests. This is typically prohibited by OAEI rules. However, at the same time, AgreementMaker has improved its results over last year, so we found it interesting to report them.

The summary of the results track by track is provided in the following sections.

3 Benchmark

The goal of the benchmark data set is to provide a stable and detailed picture of each algorithm. For that purpose, algorithms are run on systematically generated test cases.

3.1 Test data

The systematic benchmark test set is built around a seed ontology and many variations of it. The ontologies are described in OWL-DL and serialized in the RDF/XML format. The reference ontology is that of test #101. Participants have to match this reference ontology with the variations. Variations are focused on the characterization of the behavior of the tools rather than having them compete on real-life problems. They are organized in three groups:

Simple tests (1xx), such as comparing the reference ontology with itself, with another irrelevant ontology (the wine ontology used in the OWL primer) or with the same ontology in its restriction to OWL-Lite;

Systematic tests (2xx), obtained by discarding features from a reference ontology. They aim at evaluating how an algorithm behaves when a particular type of information is lacking. The considered features were:
- Names of entities, which can be replaced by random strings, synonyms, names with different conventions, or strings in another language than English;
- Comments, which can be suppressed or translated into another language;
- Specialization hierarchy, which can be suppressed, expanded or flattened;
- Instances, which can be suppressed;
- Properties, which can be suppressed or have their restrictions on classes discarded;
- Classes, which can be expanded, i.e., replaced by several classes, or flattened.

Four real-life ontologies of bibliographic references (3xx) found on the web and left mostly untouched (only xmlns and xml:base attributes were added). This group is only used for the initial benchmark.

This year, we departed from the usual bibliographic benchmark that had been used since 2004. We used a new test generator [14] in order to reproduce the structure of the benchmark from different seed ontologies (an illustrative sketch of such an alteration is given at the end of this subsection). We have generated three different benchmarks against which matchers have been evaluated:

benchmark (biblio) is the benchmark data set that has been used since 2004. It is used for participants to check that they can run the tests. It also allows for comparison with other systems since then. The seed ontology concerns bibliographic references and is freely inspired by BibTeX. It contains 33 named classes, 24 object properties, 40 data properties, 56 named individuals and 20 anonymous individuals. We have considered the original version of the benchmark (referred to as original below) and a new automatically generated one (biblio).

benchmark2 (ekaw): The Ekaw ontology, one of the ontologies from the conference track (§5), was used as seed ontology for generating the Benchmark2 data set. It contains 74 classes and 33 object properties. The results with this new data set were provided to participants after the preliminary evaluation.

benchmark3 (finance): This data set is based on the Finance ontology, which contains 322 classes, 247 object properties, 64 data properties and 1113 named individuals. This ontology was not disclosed to the participants.
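The generator of [14] is not reproduced here; the following is only an illustrative sketch of the flavor of one systematic alteration, replacing class names by random strings, written with rdflib (an assumption of this sketch, not the tool's actual implementation). The renaming map produced by the transformation is precisely what serves as the reference alignment:

    import random, string
    from rdflib import Graph, URIRef, RDF, OWL

    def random_name(n=8):
        return "".join(random.choices(string.ascii_lowercase, k=n))

    def scramble_class_names(path):
        g = Graph().parse(path)
        renaming = {cls: URIRef(str(cls).rsplit("#", 1)[0] + "#" + random_name())
                    for cls in g.subjects(RDF.type, OWL.Class)
                    if isinstance(cls, URIRef)}
        out = Graph()
        for s, p, o in g:  # rewrite every occurrence of a renamed class
            out.add((renaming.get(s, s), p, renaming.get(o, o)))
        return out, renaming  # the renaming doubles as the reference alignment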

Having these three data sets allows us to better evaluate the dependency between the results and the seed ontology. The SEALS platform allows for evaluating matchers against all these data sets automatically. For all data sets, the reference alignments are still limited: they only match named classes and properties and use the = relation with a confidence of 1. A full description of these tests can be found on the OAEI web site.

3.2 Results

Sixteen systems participated in the benchmark track of this year's campaign (see Table 2). Of the eleven participants of last year, only four participated this year (AgreementMaker, Aroma, CODI and MapPSO). On the other hand, there are ten new participants, while two participants (CIDER and Lily) had participated in earlier campaigns as well. In the following, we present the evaluation results, both in terms of runtime and of compliance with respect to the reference alignments.

Portability. 18 systems were registered on the SEALS portal. One abandoned due to the requirements posed by the platform and another one abandoned silently. Thus, 16 systems bundled their tools into the SEALS format. Of these 16 systems, we were not able to run the final versions of OMR and OACAS (packaging errors). CODI was not working on the operating system version under which we measured runtime: CODI runs under Windows and some versions of Linux, but has specific requirements not met by the Linux version that was used for running the SEALS platform (Fedora 8). Some other systems still have (fixable) problems with the output they generate (see footnote 6).

Runtime. This year we were able to measure the performance of matchers in terms of runtime. We used a 3GHz Xeon 5472 (4 cores) machine running Linux Fedora 8 with 8GB RAM. This is a very preliminary setting, mainly aimed at testing the deployability of tools on the SEALS platform. Table 3 presents the time required by systems to complete the 94 tests in each data set (see footnote 7). These results are based on 3 runs of each matcher on each data set. We also include the result of a simple edit distance algorithm on labels (edna; a sketch of such a baseline is given below). Unfortunately, we were not able to compare CODI's runtime with that of the other systems. Considering all tasks but finance, there are systems which can run them in less than 15mn (Aroma, edna, LogMap, CSA, YAM++, MapEVO, AgreementMaker, MapSSS), systems performing the tasks within one hour (Cider, MaasMatch, Lily) and systems which need more time (MapPSO, LDOA, Optima). Figure 1 better illustrates the correlation between the number of elements in each seed ontology and the time taken by matchers for generating the 94 alignments. The fastest matcher, independently of the seed ontology, is Aroma (even for finance), followed by LogMap, CSA, YAM++, AgreementMaker (AgrMaker) and MapSSS.

6 All evaluations have been performed with the Alignment API 4.2 [3], with the exception of LDOA, for which we had to adapt the relaxed evaluators to obtain results.
7 From the 111 tests in the original benchmark data set, 17 have not been automatically generated; for comparative purposes, they were discarded.
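The edna baseline belongs to the Alignment API; as a hedged sketch of what such a baseline does (assuming a normalized Levenshtein distance over lowercased labels, which may differ in detail from the actual implementation), confidences come out graded rather than boolean:

    def levenshtein(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[-1] + 1,                # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def edna_like(labels1, labels2, threshold=0.5):
        """Match labels whose normalized edit distance is small enough."""
        matches = []
        for l1 in labels1:
            for l2 in labels2:
                d = levenshtein(l1.lower(), l2.lower())
                confidence = 1 - d / max(len(l1), len(l2), 1)
                if confidence >= threshold:
                    matches.append((l1, l2, confidence))
        return matches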

Table 3. Runtime (in minutes, based on 3 runs) and the five best systems in terms of F-measure in each data set (top-5), for the original, biblio, ekaw and finance data sets. Error indicates that the tool could not run in the current setting or that its final version has a packaging error; T indicates that the tool could not process the single 101 test in less than 2 hours; x indicates that the tool breaks when parsing some ontologies. [Most numeric entries are not legible in this transcription. Recoverable entries, in the order original, biblio, ekaw, finance: CODI, OACAS and OMR: Error on all four data sets; LogMap: Error on finance; Lily and YAM++: T on finance; LDOA: 28.94h, 29.31h, 17h, T; MapPSO: 3.05h, 3.09h, 3.72h, 85.98h; MapSSS: 8.84, x, 4.42, x; Optima: 3.15h, 2.48h, 88.80h, T.]

Furthermore, as detailed in the following, AgreementMaker, CSA, CODI and YAM++ are also the best systems for most of the different data sets. For finance, we observed that many participants were not able to deal with large ontologies. This applies to the slowest systems of the other tasks, but other problems occur with AgreementMaker and MapSSS. Fast systems like LogMap could not process some of the test cases due to the inconsistency of the finance ontology (as did CODI). Finally, other relatively fast systems, such as YAM++ and Lily, had to time out. We plan to work on these two issues in the next campaigns.

Compliance. Concerning compliance, we focus on the benchmark2 (ekaw) data set. Table 4 shows the results of participants as well as those given by edna (simple edit distance algorithm on labels). The full results are on the OAEI web site. As shown in Table 4, two systems achieve top performance in terms of F-measure: MapSSS and YAM++, with CODI, CSA and AgreementMaker as close followers, respectively. Lily and CIDER presented intermediate values of precision and recall. All systems achieve a high level of precision and relatively low values of recall. Only MapEVO had a significantly lower recall than edna (with LogMap and MaasMatch (MaasMtch) having slightly lower values), while no system had lower precision.

Table 4. Results obtained by participants on the Benchmark2 (ekaw) test case (harmonic means). For each system (refalign, edna, AgrMaker, Aroma, CSA, CIDER, CODI, LDOA, Lily, LogMap, MaasMtch, MapEVO, MapPSO, MapSSS, YAM++ and Optima), the table reports precision, F-measure and recall on the 1xx and 2xx test groups, their harmonic means, and the relaxed and weighted variants (rows Symmetric, Effort, P-oriented, R-oriented and Weighted). Relaxed precision and recall correspond to the three measures of [4]: symmetric proximity, correction effort and oriented (precision and recall). Weighted precision and recall take into account the confidence associated with correspondences by the matchers. [The numeric values are not legible in this transcription.]

Fig. 1. Logarithmic plot of the time taken by matchers (averaged on 3 runs) to deal with the different data sets: biblio, ekaw and finance.

Looking at each group of tests, in the simple tests (1xx) all systems have similar performance, excluding CODI. As noted in previous campaigns, the algorithms have their best scores with the 1xx test series. This is because there are no modifications in the labels of classes and properties in these tests, and basically all matchers are able to handle them. Considering that Benchmark2 has one single test in 1xx, the discriminant category is 2xx, with 101 tests. For this category, the top five systems in terms of F-measure (as stated above) are: MapSSS, YAM++, CODI, CSA and AgreementMaker, respectively (with CIDER and Lily as followers).

Many algorithms have provided their results with confidence measures. It is thus possible to draw precision/recall graphs in order to compare them. Figure 2 shows these graphs. The results are only relevant for the participants who provide confidence measures different from 1 or 0 (see Table 2). As last year, the graphs show the real precision at n% recall and they stop when no more correspondences are available (the end point then corresponds to the precision and recall reported in Table 4).

The results have also been compared with the relaxed measures proposed in [4], namely symmetric proximity, correction effort and oriented measures (Symmetric, Effort and P/R-oriented in Table 4). Table 4 shows that these measures provide a uniform and limited improvement to most systems. As last year, the exception is MapEVO, which has a considerable improvement in precision. This could be explained by the fact that this system misses the target, but not by far (the incorrect correspondences found by the matcher are close to correspondences in the reference alignment), so the gain provided by the relaxed measures has a considerable impact for this system. This may also be explained by the global optimization of the system, which tends to be globally roughly correct, as opposed to locally strictly correct as measured by precision and recall.

The same confidence-weighted precision and recall as last year have been computed. They reward systems able to provide accurate confidence measures (or penalize less the mistakes on correspondences with low confidence) [6]; a hedged sketch is given at the end of this section. These measures increase precision for most of the systems, especially edna, MapEVO and MapPSO (which possibly had many incorrect correspondences with low confidence). This shows that the simple edit distance computed by edna is valuable as a confidence measure (the weighted precision and recall for edna could be taken as a decent baseline). They also decrease recall, especially for CSA, Lily, LogMap, MapPSO and YAM++ (which apparently had many correct correspondences with low confidence). The variation for YAM++ is quite impressive: this is because YAM++ assigns especially low confidence to correct correspondences. Some systems, such as AgreementMaker, CODI, MaasMatch and MapSSS, generate all correspondences with confidence = 1, so they show no change.

Comparison across data sets. Table 5 presents the average F-measure over 3 runs for each data set (as in Table 3, some of these results are based on only one run). These three runs are not strictly necessary: even if matchers exhibit non-deterministic behavior on a test case basis, their average F-measure on the whole data set remains the same [14]. This year, although most of the systems participating in 2010 have improved their algorithms, none of them could outperform ASMOV, the best system of the 2010 campaign. With respect to the original benchmark data set and the newly generated one (original and biblio in Table 5), we could observe a constant negative variation of 1-2% in F-measure for most of the systems (except CODI and MapEVO). Furthermore, most of the systems perform better with the bibliographic ontology than with ekaw (a variation of 5-15%). The exceptions are LDOA, LogMap and MapPSO, followed by MaasMatch and CIDER, with relatively stable F-measures. Although we do not have enough results for a fair comparison with finance, we could observe that CSA and MaasMatch are the most stable matchers (with less variation than the others), followed by Aroma, CIDER and AgreementMaker, respectively.

Finally, the group of best systems in each data set remains relatively the same across the different seed ontologies. Disregarding finance, CSA, CODI and YAM++ are ahead as the best systems for all three data sets, with MapSSS (2 out of 3) and AgreementMaker, Aroma and Lily (1 out of 3) as followers.

3.3 Conclusions

For the first time, we could observe a high variation in the time matchers require to complete the alignment tasks (from some minutes to some days). We can also conclude that compliance is not proportional to runtime: the top systems in terms of F-measure were able to finish the alignment tasks in less than 15mn (with Aroma and LogMap as the fastest matchers, with intermediate levels of compliance). Regarding the capability of dealing with large ontologies, many of the participants were not able to process them, leaving room for further improvement on this issue.
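The exact definition of the confidence-weighted measures is given in [6]; the following is only one plausible reading of the idea, stated as an assumption rather than as the official formula. Each correspondence counts proportionally to its confidence, so a low-confidence mistake costs little precision and a low-confidence correct match earns little recall, which reproduces the behavior reported above:

    def weighted_pr(alignment, reference):
        """alignment: dict mapping a correspondence to its confidence
        in [0,1]; reference: set of correct correspondences."""
        correct = sum(conf for c, conf in alignment.items() if c in reference)
        found = sum(alignment.values())
        precision = correct / found if found else 0.0
        recall = correct / len(reference) if reference else 0.0
        return precision, recall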

Fig. 2. Precision/recall graphs for benchmarks. The alignments generated by matchers are cut under a threshold necessary for achieving n% recall and the corresponding precision is computed. The numbers in the legend are the Mean Average Precision (MAP): the average precision for each correct retrieved correspondence. Systems for which these graphs are not meaningful (because they did not provide graded confidence values) are drawn in dashed lines.

Table 5. Results obtained by participants on each data set (based on 94 tests), including the results of the 2010 participants, and the top-five F-measures (the five best systems in each data set) for the original, biblio, ekaw and finance data sets. [Most numeric values are not legible in this transcription; among the surviving entries, ASMOV reaches .93 on the original data set and MapSSS .84 (original) and .78 (biblio).]

With respect to compliance, the newcomers (CSA, YAM++ and MapSSS) have mostly outperformed the other participants on the newly generated benchmarks. On the other hand, on the well-known original benchmark data set, none of the systems was able to outperform the top performer of last year (ASMOV).

4 Anatomy

As in previous years, the anatomy track confronts existing matching technology with a specific type of ontologies from the biomedical domain. In this domain, many ontologies have been built covering different aspects of medical research. We focus on fragments of two biomedical ontologies which describe the human anatomy and the anatomy of the mouse. The data set of this track has been used since 2007. For a detailed description we refer the reader to the OAEI 2007 results paper [7].

4.1 Experimental setting

Contrary to previous years, we distinguish only between two evaluation experiments. Subtask #1 is about applying a matcher with its standard setting to the matching task. In previous years we had also asked for additional alignments that favor precision over recall and vice versa (subtasks #2 and #3). These subtasks are not part of the anatomy track in 2011, due to the fact that the SEALS platform does not allow for running tools with different configurations. Furthermore, we proposed a fourth subtask, in which a partial reference alignment has to be used as an additional input.

In our experiments we compare precision, recall, F-measure and recall+. We introduced recall+ to measure the amount of detected non-trivial correspondences. From 2007 to 2009, we reported on runtimes measured by the participants themselves. This survey revealed large differences in runtimes. This year we can compare the runtimes of participants by executing their systems ourselves on the same machine. We used a Windows 2007 machine with 2.4 GHz (2 cores) and 7GB RAM allocated to the matching systems.

For the 2011 evaluation, we again improved the reference alignment of the data set. We removed doubtful correspondences and included several correct correspondences that had not been included in the past. As a result, we measured for the alignments generated in 2010 a slightly better F-measure (about +1%) compared to the computation based on the old reference alignment. For that reason we have also included the top-3 systems of 2010 with recomputed precision/recall scores.

4.2 Results

In the following we analyze the robustness of the submitted systems and their runtimes. Further, we report on the quality of the generated alignments, mainly in terms of precision and recall.

Robustness and scalability. In 2011 there were 16 participants in the SEALS modality, while in 2010 we had only 9 participants for the anatomy track. However, this comparison is misleading: some of these 16 systems are not really intended to match large biomedical ontologies. For that reason, our first interest is the question of which systems generate a meaningful result in an acceptable time span. Results are shown in Table 6. First, we focused on whether systems finish the matching task in less than 24h. This is the case for a surprisingly low number of systems. The systems that do not finish in time can be separated into those that throw an exception related to insufficient memory after some time (marked with X) and those that were still running when we stopped the experiments after 24 hours (marked with T) (see footnote 8).

Obviously, matching relatively large ontologies is a problem for five out of the fourteen executable systems. The two systems MapPSO and MapEVO can cope with ontologies that contain more than 1000 concepts, but have problems with finding correct correspondences. Both systems generate comprehensive alignments; however, MapPSO finds only one correct correspondence and MapEVO finds none. This can be related to the way labels are encoded in the ontologies: the ontologies of the anatomy track differ from the ontologies of the benchmark and conference tracks in this respect.

Table 6. Comparison against the reference alignment: runtime (measured in seconds), size (the number of correspondences in the generated alignment), precision, F-measure, recall and recall+ for AgrMaker, LogMap, AgrMaker 2010, CODI, NBJLM 2010, Ef2Match 2010, Lily, StringEquiv, Aroma, CSA, MaasMtch, MapPSO and MapEVO; Cider and LDOA are marked T, and MapSSS, Optima and YAM++ are marked X. [The numeric values are not legible in this transcription.]

For those systems that generate an acceptable result, we observe a high variance in the measured runtimes. Clearly ahead is the system LogMap (24s), followed by Aroma (39s). Next are Lily and AgreementMaker (approx. 10mn), CODI (30mn), CSA (1h15), and finally MaasMatch (18h).

Results for subtask #1. The results of our experiments are also presented in Table 6. Since we have improved the reference alignment, we have also included recomputed precision/recall scores for the top-3 alignments submitted in 2010 (marked by the subscript 2010). Keep in mind that in 2010 AgreementMaker (AgrMaker) submitted an alignment that was the best submission to the OAEI anatomy track, compared to all previous submissions, in terms of F-measure.

8 We could not execute the two systems OACAS and OMR, not listed in the table, because the required interfaces have not been properly implemented.

Note that we also added the baseline StringEquiv, which refers to a matcher that compares the normalized labels of two concepts: if these labels are identical, a correspondence is generated. Recall+ is defined as recall, with the difference that the reference alignment R is replaced by the set difference R \ A_SE, where A_SE is the alignment generated by StringEquiv (a minimal sketch of this measure is given at the end of this section).

This year we have three systems that generate very good results, namely AgreementMaker, LogMap and CODI. The results of LogMap and CODI are very similar: both systems manage to generate an alignment with an F-measure close to the 2010 submission of AgreementMaker, with LogMap slightly ahead. However, in 2011 the alignment generated by AgreementMaker is even better than in the previous year. In particular, AgreementMaker finds more correct correspondences, which can be seen in the recall as well as in the recall+ scores. At the same time, AgreementMaker increased its precision. Also remarkable are the good results of LogMap, given that the system finishes the matching task in less than half a minute. It is thus 25 times faster than AgreementMaker and more than 75 times faster than CODI.

Lily, Aroma, CSA and MaasMatch (MaasMtch) have weaker results than the three top matching systems; however, they have proved to be applicable to larger matching tasks and can generate acceptable results for a pair of ontologies from the biomedical domain. While these systems cannot (or can barely) top the StringEquiv baseline in terms of F-measure, they manage, nevertheless, to generate many correct non-trivial correspondences. A detailed analysis of the results revealed that they miss, at the same time, many trivial correspondences. This is an uncommon result, which might, for example, be related to some pruning operations performed during the comparison of matchable entities. An exception is the MaasMatch system: it generates results that are highly similar to a subset of the alignment generated by the StringEquiv baseline.

Using an input alignment. This specific task was known as subtask #4 in the previous OAEI campaigns. Originally, we planned to study the impact of input alignments of varying size. The idea is that a partial input alignment, which might have been generated by a human expert, can help the matching system find missing correspondences. However, taking into account only those systems that could generate a meaningful alignment in time, only AgreementMaker implemented the required interface. Thus, a comparative evaluation is not possible. We may have to put more effort into advertising this specific subtask for the next OAEI.

Alignment coherence. This year we also evaluated alignment coherence. The anatomy data set contains only a small number of disjointness statements; the ontologies under discussion are in EL++. Thus, even simple techniques might have an impact on the coherence of the generated alignments. For the anatomy data set, the systems LogMap, CODI and MaasMatch generate coherent alignments. The first two systems put a focus on alignment coherence and apply special methods to ensure it. MaasMatch has generated a small, highly precise and coherent alignment. The alignments generated by the other systems are incoherent. A more detailed analysis related to alignment coherence is conducted for the alignments of the conference data set in Section 5.
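Recall+, as defined above, is ordinary recall restricted to the non-trivial part of the reference alignment. A minimal sketch, with correspondences represented as pairs of normalized labels (an assumption made for brevity):

    def string_equiv(entities1, entities2, normalize=str.lower):
        """The StringEquiv baseline: match entities whose normalized
        labels are identical."""
        return {(e1, e2) for e1 in entities1 for e2 in entities2
                if normalize(e1) == normalize(e2)}

    def recall_plus(alignment, reference, trivial):
        """trivial: the alignment A_SE produced by StringEquiv."""
        non_trivial = reference - trivial  # R \ A_SE
        if not non_trivial:
            return 0.0
        return len(alignment & non_trivial) / len(non_trivial)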

4.3 Conclusions

Less than half of the systems generate good, or at least acceptable, results for the matching task of the anatomy track. With respect to those systems that failed on anatomy, we can assume that this track was not in the focus of their developers. This means, at the same time, that many systems are particularly designed or configured for matching tasks like those found in the benchmark and conference tracks. Only a few of them are robust all-round matching systems that are capable of solving different tasks without changing their settings or algorithms.

The positive results of 2011 are the top results of AgreementMaker and the runtime performance of LogMap. AgreementMaker generated a very good result by increasing precision and recall compared to its last year's submission, which was already the best submission of 2010. LogMap clearly outperforms all other systems in terms of runtime and still generates good results. We refer the reader to the OAEI papers of these two systems for details on the algorithms.

5 Conference

The conference test case requires matching several moderately expressive ontologies. Within this track, participant results were evaluated using diverse evaluation methods. As last year, the evaluation has been supported by the SEALS platform.

5.1 Test data

The collection consists of sixteen ontologies in the domain of organizing conferences. The ontologies have been developed within the OntoFarm project (see footnote 9). The main features of this test case are:

Generally understandable domain. Most ontology engineers are familiar with organizing conferences. Therefore, they can create their own ontologies as well as evaluate the alignments among their concepts with enough erudition.

Independence of ontologies. The ontologies were developed independently and based on different resources; they thus capture the issues in organizing conferences from different points of view and with different terminologies.

Relative richness in axioms. Most ontologies were equipped with OWL DL axioms of various kinds; this opens a way to use semantic matchers.

The ontologies differ in numbers of classes and properties, in expressivity, but also in underlying resources. Ten ontologies are based on tools supporting the task of organizing conferences, two are based on the experience of people with personal participation in conference organization, and three are based on web pages of concrete conferences.

Participants were asked to provide all correct correspondences (equivalence and/or subsumption correspondences) and/or interesting correspondences within the conference data set.

9 svatek/ontofarm.html

5.2 Results

This year, we provided results in terms of F2-measure and F0.5-measure, a comparison with two baseline matchers, and a precision/recall triangular graph.

Evaluation based on the reference alignments. We evaluated the results of participants against reference alignments. These include all pairwise combinations of 7 different ontologies, i.e., 21 alignments.

Table 7. The highest average F_beta-measure (for beta = 0.5, 1, 2) and the corresponding precision and recall at some threshold for each matcher, with matchers ordered by their highest average F1-measure: YAM++, CODI, LogMap, AgrMaker, BaseLine2, MaasMtch, BaseLine1, CSA, CIDER, MapSSS, Lily, AROMA, Optima, MapPSO, LDOA, MapEVO. [The numeric values are not legible in this transcription.]

For better comparison, we evaluated the alignments with regard to three different average F-measures independently (computed using the absolute scores, i.e., the number of true positive examples). We used the F0.5-measure (beta = 0.5), which weights precision higher than recall, the F1-measure (the usual F-measure, beta = 1), which is the harmonic mean of precision and recall, and the F2-measure (beta = 2), which weights recall higher than precision; a sketch of these measures and of the baselines is given at the end of this comparison. For each of these F-measures, we selected the global confidence threshold that provides the highest average F_beta-measure. The results of these three independent evaluations (in which precision and recall can differ) are provided in Table 7. Matchers are ordered according to their highest average F1-measure.

Additionally, there are two simple string matchers as baselines. Baseline 1 is a string matcher based on string equality applied to local names of entities, which were lowercased beforehand. Baseline 2 enhances baseline 1 with three string operations: removing dashes, underscores and "has" words from all local names. These two baselines divide the matchers into four groups. Group 1 consists of the best matchers (YAM++, CODI, LogMap and AgreementMaker), having better results than baseline 2 in terms of F1-measure. Matchers which perform worse than baseline 2 in terms of F1-measure but still better than baseline 1 are in Group 2 (MaasMatch).

Group 3 (CSA, CIDER and MapSSS) contains matchers which are better than baseline 1 at least in terms of F2-measure. The other matchers (Lily, Aroma, Optima, MapPSO, LDOA and MapEVO) perform worse than baseline 1 (Group 4). Optima, MapSSS and CODI did not provide graded confidence values. The performance of matchers with regard to F1-measure is visualized in Figure 3.

Fig. 3. Precision/recall triangular graph for conference. Matchers of participants from the first three groups are represented as squares. Baselines are represented as circles. Dotted lines depict levels of precision/recall, while values of F1-measure are depicted by areas bordered by the corresponding lines F1-measure = 0.5, 0.6, 0.7.

In conclusion, all the best matchers (group one) are very close to each other. However, the matcher with the highest average F1-measure (.65) is YAM++, that with the highest average F2-measure (.61) is AgreementMaker, and that with the highest average F0.5-measure (.75) is LogMap. In any case, we should take into account that this evaluation has been made over a subset of all possible alignments (one fifth).

Comparison with previous years. Three matchers also participated in the previous year. AgreementMaker improved its average F1-measure from .58 to .62 through higher precision (from .53 to .65) and lower recall (from .62 to .59); CODI increased its average F1-measure from .62 to .64 through higher recall (from .48 to .57) and lower precision (from .86 to .74). AROMA (with its AROMA- variant) slightly decreased its average F1-measure from .42 to .40 through lower precision (from .36 to .35) and recall (from .49 to .46).
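A short sketch of the two baselines and of the F_beta-measure used to rank the matchers; the normalization in baseline 2 is simplified (a real implementation might strip "has" only as a word prefix, e.g. has_Name vs. name):

    import re

    def normalize1(local_name):
        """Baseline 1: plain lowercasing of local names."""
        return local_name.lower()

    def normalize2(local_name):
        """Baseline 2: additionally remove dashes, underscores and 'has'."""
        return re.sub(r"has|[-_]", "", local_name.lower())

    def f_beta(precision, recall, beta):
        """F_0.5 favours precision, F_1 is the harmonic mean, F_2 favours
        recall: F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
        if precision == 0 and recall == 0:
            return 0.0
        return ((1 + beta**2) * precision * recall
                / (beta**2 * precision + recall))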

Evaluation based on alignment coherence. As in previous years, we apply the Maximum Cardinality measure proposed in [13] to measure the degree of alignment incoherence. Details on the algorithms can be found in [12]. The reasoner underlying our implementation is Pellet [16]. The results of our experiments are depicted in Table 8. It shows the average over all test cases of the conference track, which covers more ontologies than those connected via reference alignments. We had to omit the test cases in which the ontologies Confious and Linklings are involved as source or target ontologies: these ontologies resulted in reasoning problems in many cases. Thus, we had 91 test cases for each matching system. However, we faced reasoning problems for some combinations of test cases and alignments; in these cases we computed the average score by ignoring those test cases. These problems occurred mainly for highly incoherent alignments. The last row of Table 8 informs about the number of test cases that were excluded. Note that we did not analyze the results of those systems that generated alignments with a precision of less than .25.

Matcher          AgrMaker  AROMA  CIDER  CODI  CSA    Lily   LogMap  MaasMtch  MapSSS  Optima  YAM++
Inc. alignments  49/90     58/88  69/88  0/91  69/69  70/90  8/91    21/91     51/90   73/84   41/91
Degree of inc.   12%       16%    13%    0%    >29%   14%    2%      4%        9%      >31%    7%

Table 8. Average size of alignments, number of incoherent alignments, and average degree of incoherence. The prefix > is added if the search algorithm stopped in one of the test cases due to a timeout of 10min prior to computing the degree of incoherence. [The Size and Reasoning problems rows are not legible in this transcription.]

CODI is the only system that guarantees the coherence of the generated alignments: while last year some of its alignments were incoherent, all of the alignments generated in 2011 are coherent. LogMap, a system with a special focus on alignment coherence and efficiency [11], generates coherent alignments in most cases. A closer look at the outliers reveals that all of its incoherent alignments occurred for ontology pairs in which the ontology Cocus was involved. This ontology suffers from a very specific modeling error based on the inappropriate use of universal quantification. In the third position we find MaasMatch, which generates fewer incoherent alignments than the remaining systems. This might be related to the high precision of the system. Contrary to LogMap, its incoherent alignments are generated for different combinations of ontologies and no specific pattern emerges.

It is not easy to interpret the results of the remaining matching systems due to the different sizes of the alignments they have generated. The more correspondences an alignment contains, the higher the probability that it makes some concept unsatisfiable. It is not always clear whether a relatively low or high degree of incoherence is mainly caused by the small or large size of the alignments, or related to the use of a specific technique. Overall, we conclude that alignment coherence is not taken into account by these systems. However, in 2011 we have at least some systems that apply specific methods to ensure coherence for all, or at least for a large subset, of the generated alignments. Compared to the previous years, this is a positive result of our analysis.
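This is not the evaluation code of [12]; as a minimal sketch under stated assumptions (the owlready2 library, an alignment reduced to class equivalences, full class IRIs), the underlying test it approximates looks as follows: assert the alignment's equivalences over the two loaded ontologies and ask a reasoner (Pellet, as above [16]) which classes become unsatisfiable. An alignment is incoherent if any class does:

    from owlready2 import get_ontology, sync_reasoner_pellet, default_world

    def unsatisfiable_classes(source_iri, target_iri, equivalences):
        """equivalences: iterable of (class IRI, class IRI) pairs."""
        get_ontology(source_iri).load()
        get_ontology(target_iri).load()
        for iri1, iri2 in equivalences:
            c1, c2 = default_world[iri1], default_world[iri2]
            c1.equivalent_to.append(c2)  # assert the correspondence
        sync_reasoner_pellet()  # the track used Pellet [16]
        # Unsatisfiable classes are those inferred equivalent to owl:Nothing.
        return list(default_world.inconsistent_classes())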


More information

Ontology matching using vector space

Ontology matching using vector space University of Wollongong Research Online University of Wollongong in Dubai - Papers University of Wollongong in Dubai 2008 Ontology matching using vector space Zahra Eidoon University of Tehran, Iran Nasser

More information

POMap results for OAEI 2017

POMap results for OAEI 2017 POMap results for OAEI 2017 Amir Laadhar 1, Faiza Ghozzi 2, Imen Megdiche 1, Franck Ravat 1, Olivier Teste 1, and Faiez Gargouri 2 1 Paul Sabatier University, IRIT (CNRS/UMR 5505) 118 Route de Narbonne

More information

ODGOMS - Results for OAEI 2013 *

ODGOMS - Results for OAEI 2013 * ODGOMS - Results for OAEI 2013 * I-Hong Kuo 1,2, Tai-Ting Wu 1,3 1 Industrial Technology Research Institute, Taiwan 2 yihonguo@itri.org.tw 3 taitingwu@itri.org.tw Abstract. ODGOMS is a multi-strategy ontology

More information

Using AgreementMaker to Align Ontologies for OAEI 2010

Using AgreementMaker to Align Ontologies for OAEI 2010 Using AgreementMaker to Align Ontologies for OAEI 2010 Isabel F. Cruz, Cosmin Stroe, Michele Caci, Federico Caimi, Matteo Palmonari, Flavio Palandri Antonelli, Ulas C. Keles ADVIS Lab, Department of Computer

More information

ServOMap and ServOMap-lt Results for OAEI 2012

ServOMap and ServOMap-lt Results for OAEI 2012 ServOMap and ServOMap-lt Results for OAEI 2012 Mouhamadou Ba 1, Gayo Diallo 1 1 LESIM/ISPED, Univ. Bordeaux Segalen, F-33000, France first.last@isped.u-bordeaux2.fr Abstract. We present the results obtained

More information

AROMA results for OAEI 2009

AROMA results for OAEI 2009 AROMA results for OAEI 2009 Jérôme David 1 Université Pierre-Mendès-France, Grenoble Laboratoire d Informatique de Grenoble INRIA Rhône-Alpes, Montbonnot Saint-Martin, France Jerome.David-at-inrialpes.fr

More information

The HMatch 2.0 Suite for Ontology Matchmaking

The HMatch 2.0 Suite for Ontology Matchmaking The HMatch 2.0 Suite for Ontology Matchmaking S. Castano, A. Ferrara, D. Lorusso, and S. Montanelli Università degli Studi di Milano DICo - Via Comelico, 39, 20135 Milano - Italy {castano,ferrara,lorusso,montanelli}@dico.unimi.it

More information

Combining Ontology Mapping Methods Using Bayesian Networks

Combining Ontology Mapping Methods Using Bayesian Networks Combining Ontology Mapping Methods Using Bayesian Networks Ondřej Šváb, Vojtěch Svátek University of Economics, Prague, Dep. Information and Knowledge Engineering, Winston Churchill Sq. 4, 130 67 Praha

More information

Lily: Ontology Alignment Results for OAEI 2009

Lily: Ontology Alignment Results for OAEI 2009 Lily: Ontology Alignment Results for OAEI 2009 Peng Wang 1, Baowen Xu 2,3 1 College of Software Engineering, Southeast University, China 2 State Key Laboratory for Novel Software Technology, Nanjing University,

More information

RiMOM Results for OAEI 2010

RiMOM Results for OAEI 2010 RiMOM Results for OAEI 2010 Zhichun Wang 1, Xiao Zhang 1, Lei Hou 1, Yue Zhao 2, Juanzi Li 1, Yu Qi 3, Jie Tang 1 1 Tsinghua University, Beijing, China {zcwang,zhangxiao,greener,ljz,tangjie}@keg.cs.tsinghua.edu.cn

More information

A Session-based Ontology Alignment Approach for Aligning Large Ontologies

A Session-based Ontology Alignment Approach for Aligning Large Ontologies Undefined 1 (2009) 1 5 1 IOS Press A Session-based Ontology Alignment Approach for Aligning Large Ontologies Editor(s): Name Surname, University, Country Solicited review(s): Name Surname, University,

More information

Executing Evaluations over Semantic Technologies using the SEALS Platform

Executing Evaluations over Semantic Technologies using the SEALS Platform Executing Evaluations over Semantic Technologies using the SEALS Platform Miguel Esteban-Gutiérrez, Raúl García-Castro, Asunción Gómez-Pérez Ontology Engineering Group, Departamento de Inteligencia Artificial.

More information

OWL-CM : OWL Combining Matcher based on Belief Functions Theory

OWL-CM : OWL Combining Matcher based on Belief Functions Theory OWL-CM : OWL Combining Matcher based on Belief Functions Theory Boutheina Ben Yaghlane 1 and Najoua Laamari 2 1 LARODEC, Université de Tunis, IHEC Carthage Présidence 2016 Tunisia boutheina.yaghlane@ihec.rnu.tn

More information

Consistency-driven Argumentation for Alignment Agreement

Consistency-driven Argumentation for Alignment Agreement Consistency-driven Argumentation for Alignment Agreement Cássia Trojahn and Jérôme Euzenat INRIA & LIG, 655 Avenue de l Europe, Montbonnot Saint Martin, France {cassia.trojahn,jerome.euzenat}@inrialpes.fr

More information

YAM++ : A multi-strategy based approach for Ontology matching task

YAM++ : A multi-strategy based approach for Ontology matching task YAM++ : A multi-strategy based approach for Ontology matching task Duy Hoa Ngo, Zohra Bellahsene To cite this version: Duy Hoa Ngo, Zohra Bellahsene. YAM++ : A multi-strategy based approach for Ontology

More information

AOT / AOTL Results for OAEI 2014

AOT / AOTL Results for OAEI 2014 AOT / AOTL Results for OAEI 2014 Abderrahmane Khiat 1, Moussa Benaissa 1 1 LITIO Lab, University of Oran, BP 1524 El-Mnaouar Oran, Algeria abderrahmane_khiat@yahoo.com moussabenaissa@yahoo.fr Abstract.

More information

HotMatch Results for OEAI 2012

HotMatch Results for OEAI 2012 HotMatch Results for OEAI 2012 Thanh Tung Dang, Alexander Gabriel, Sven Hertling, Philipp Roskosch, Marcel Wlotzka, Jan Ruben Zilke, Frederik Janssen, and Heiko Paulheim Technische Universität Darmstadt

More information

Matching and Alignment: What is the Cost of User Post-match Effort?

Matching and Alignment: What is the Cost of User Post-match Effort? Matching and Alignment: What is the Cost of User Post-match Effort? (Short paper) Fabien Duchateau 1 and Zohra Bellahsene 2 and Remi Coletta 2 1 Norwegian University of Science and Technology NO-7491 Trondheim,

More information

Towards Rule Learning Approaches to Instance-based Ontology Matching

Towards Rule Learning Approaches to Instance-based Ontology Matching Towards Rule Learning Approaches to Instance-based Ontology Matching Frederik Janssen 1, Faraz Fallahi 2 Jan Noessner 3, and Heiko Paulheim 1 1 Knowledge Engineering Group, TU Darmstadt, Hochschulstrasse

More information

ADOM: Arabic Dataset for Evaluating Arabic and Cross-lingual Ontology Alignment Systems

ADOM: Arabic Dataset for Evaluating Arabic and Cross-lingual Ontology Alignment Systems ADOM: Arabic Dataset for Evaluating Arabic and Cross-lingual Ontology Alignment Systems Abderrahmane Khiat 1, Moussa Benaissa 1, and Ernesto Jiménez-Ruiz 2 1 LITIO Laboratory, University of Oran1 Ahmed

More information

WeSeE-Match Results for OEAI 2012

WeSeE-Match Results for OEAI 2012 WeSeE-Match Results for OEAI 2012 Heiko Paulheim Technische Universität Darmstadt paulheim@ke.tu-darmstadt.de Abstract. WeSeE-Match is a simple, element-based ontology matching tool. Its basic technique

More information

Ontology Modularization for Knowledge Selection: Experiments and Evaluations

Ontology Modularization for Knowledge Selection: Experiments and Evaluations Ontology Modularization for Knowledge Selection: Experiments and Evaluations Mathieu d Aquin 1, Anne Schlicht 2, Heiner Stuckenschmidt 2, and Marta Sabou 1 1 Knowledge Media Institute (KMi), The Open University,

More information

OntoDNA: Ontology Alignment Results for OAEI 2007

OntoDNA: Ontology Alignment Results for OAEI 2007 OntoDNA: Ontology Alignment Results for OAEI 2007 Ching-Chieh Kiu 1, Chien Sing Lee 2 Faculty of Information Technology, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor. Malaysia. 1

More information

ALIN Results for OAEI 2017

ALIN Results for OAEI 2017 ALIN Results for OAEI 2017 Jomar da Silva 1, Fernanda Araujo Baião 1, and Kate Revoredo 1 Graduated Program in Informatics, Department of Applied Informatics Federal University of the State of Rio de Janeiro

More information

Semantic Interoperability. Being serious about the Semantic Web

Semantic Interoperability. Being serious about the Semantic Web Semantic Interoperability Jérôme Euzenat INRIA & LIG France Natasha Noy Stanford University USA 1 Being serious about the Semantic Web It is not one person s ontology It is not several people s common

More information

ALIN Results for OAEI 2016

ALIN Results for OAEI 2016 ALIN Results for OAEI 2016 Jomar da Silva, Fernanda Araujo Baião and Kate Revoredo Department of Applied Informatics Federal University of the State of Rio de Janeiro (UNIRIO), Rio de Janeiro, Brazil {jomar.silva,fernanda.baiao,katerevoredo}@uniriotec.br

More information

3.4 Data-Centric workflow

3.4 Data-Centric workflow 3.4 Data-Centric workflow One of the most important activities in a S-DWH environment is represented by data integration of different and heterogeneous sources. The process of extract, transform, and load

More information

TOAST Results for OAEI 2012

TOAST Results for OAEI 2012 TOAST Results for OAEI 2012 Arkadiusz Jachnik, Andrzej Szwabe, Pawel Misiorek, and Przemyslaw Walkowiak Institute of Control and Information Engineering, Poznan University of Technology, M. Sklodowskiej-Curie

More information

VERIZON ACHIEVES BEST IN TEST AND WINS THE P3 MOBILE BENCHMARK USA

VERIZON ACHIEVES BEST IN TEST AND WINS THE P3 MOBILE BENCHMARK USA VERIZON ACHIEVES BEST IN TEST AND WINS THE P3 MOBILE BENCHMARK USA INTRODUCTION in test 10/2018 Mobile Benchmark USA The international benchmarking expert P3 has been testing the performance of cellular

More information

VOAR: A Visual and Integrated Ontology Alignment Environment

VOAR: A Visual and Integrated Ontology Alignment Environment VOAR: A Visual and Integrated Ontology Alignment Environment Bernardo Severo 1, Cassia Trojahn 2, Renata Vieira 1 1 Pontifícia Universidade Católica do Rio Grande do Sul (Brazil) 2 Université Toulouse

More information

PRIOR System: Results for OAEI 2006

PRIOR System: Results for OAEI 2006 PRIOR System: Results for OAEI 2006 Ming Mao, Yefei Peng University of Pittsburgh, Pittsburgh, PA, USA {mingmao,ypeng}@mail.sis.pitt.edu Abstract. This paper summarizes the results of PRIOR system, which

More information

Towards On the Go Matching of Linked Open Data Ontologies

Towards On the Go Matching of Linked Open Data Ontologies Towards On the Go Matching of Linked Open Data Ontologies Isabel F. Cruz ADVIS Lab Department of Computer Science University of Illinois at Chicago ifc@cs.uic.edu Matteo Palmonari DISCo University of Milan-Bicocca

More information

Supplementary text S6 Comparison studies on simulated data

Supplementary text S6 Comparison studies on simulated data Supplementary text S Comparison studies on simulated data Peter Langfelder, Rui Luo, Michael C. Oldham, and Steve Horvath Corresponding author: shorvath@mednet.ucla.edu Overview In this document we illustrate

More information

DSSim-ontology mapping with uncertainty

DSSim-ontology mapping with uncertainty DSSim-ontology mapping with uncertainty Miklos Nagy, Maria Vargas-Vera, Enrico Motta Knowledge Media Institute (Kmi) The Open University Walton Hall, Milton Keynes, MK7 6AA, United Kingdom mn2336@student.open.ac.uk;{m.vargas-vera,e.motta}@open.ac.uk

More information

RaDON Repair and Diagnosis in Ontology Networks

RaDON Repair and Diagnosis in Ontology Networks RaDON Repair and Diagnosis in Ontology Networks Qiu Ji, Peter Haase, Guilin Qi, Pascal Hitzler, and Steffen Stadtmüller Institute AIFB Universität Karlsruhe (TH), Germany {qiji,pha,gqi,phi}@aifb.uni-karlsruhe.de,

More information

Evaluating semantic data infrastructure components for small devices

Evaluating semantic data infrastructure components for small devices Evaluating semantic data infrastructure components for small devices Andriy Nikolov, Ning Li, Mathieu d Aquin, Enrico Motta Knowledge Media Institute, The Open University, Milton Keynes, UK {a.nikolov,

More information

Evaluation Measures for Ontology Matchers in Supervised Matching Scenarios

Evaluation Measures for Ontology Matchers in Supervised Matching Scenarios Evaluation Measures for Ontology Matchers in Supervised Matching Scenarios Dominique Ritze, Heiko Paulheim, and Kai Eckert University of Mannheim, Mannheim, Germany Research Group Data and Web Science

More information

DSSim Results for OAEI 2009

DSSim Results for OAEI 2009 DSSim Results for OAEI 2009 Miklos Nagy 1, Maria Vargas-Vera 1, and Piotr Stolarski 2 1 The Open University Walton Hall, Milton Keynes, MK7 6AA, United Kingdom m.nagy@open.ac.uk;m.vargas-vera@open.ac.uk

More information

SEMA: Results for the Ontology Alignment Contest OAEI

SEMA: Results for the Ontology Alignment Contest OAEI SEMA: Results for the Ontology Alignment Contest OAEI 2007 1 Vassilis Spiliopoulos 1, 2, Alexandros G. Valarakos 1, George A. Vouros 1, and Vangelis Karkaletsis 2 1 AI Lab, Information and Communication

More information

SERIMI Results for OAEI 2011

SERIMI Results for OAEI 2011 SERIMI Results for OAEI 2011 Samur Araujo 1, Arjen de Vries 1, and Daniel Schwabe 2 1 Delft University of Technology, PO Box 5031, 2600 GA Delft, the Netherlands {S.F.CardosodeAraujo, A.P.deVries}@tudelft.nl

More information

Challenges on Combining Open Web and Dataset Evaluation Results: The Case of the Contextual Suggestion Track

Challenges on Combining Open Web and Dataset Evaluation Results: The Case of the Contextual Suggestion Track Challenges on Combining Open Web and Dataset Evaluation Results: The Case of the Contextual Suggestion Track Alejandro Bellogín 1,2, Thaer Samar 1, Arjen P. de Vries 1, and Alan Said 1 1 Centrum Wiskunde

More information

CroLOM: Cross-Lingual Ontology Matching System

CroLOM: Cross-Lingual Ontology Matching System CroLOM: Cross-Lingual Ontology Matching System Results for OAEI 2016 Abderrahmane Khiat LITIO Laboratory, University of Oran1 Ahmed Ben Bella, Oran, Algeria abderrahmane khiat@yahoo.com Abstract. The current

More information

Introduction Entity Match Service. Step-by-Step Description

Introduction Entity Match Service. Step-by-Step Description Introduction Entity Match Service In order to incorporate as much institutional data into our central alumni and donor database (hereafter referred to as CADS ), we ve developed a comprehensive suite of

More information

ResPubliQA 2010

ResPubliQA 2010 SZTAKI @ ResPubliQA 2010 David Mark Nemeskey Computer and Automation Research Institute, Hungarian Academy of Sciences, Budapest, Hungary (SZTAKI) Abstract. This paper summarizes the results of our first

More information

XBenchMatch: a Benchmark for XML Schema Matching Tools

XBenchMatch: a Benchmark for XML Schema Matching Tools XBenchMatch: a Benchmark for XML Schema Matching Tools Fabien Duchateau, Zohra Bellahsene, Ela Hunt To cite this version: Fabien Duchateau, Zohra Bellahsene, Ela Hunt. XBenchMatch: a Benchmark for XML

More information

Istat s Pilot Use Case 1

Istat s Pilot Use Case 1 Istat s Pilot Use Case 1 Pilot identification 1 IT 1 Reference Use case X 1) URL Inventory of enterprises 2) E-commerce from enterprises websites 3) Job advertisements on enterprises websites 4) Social

More information

QAKiS: an Open Domain QA System based on Relational Patterns

QAKiS: an Open Domain QA System based on Relational Patterns QAKiS: an Open Domain QA System based on Relational Patterns Elena Cabrio, Julien Cojan, Alessio Palmero Aprosio, Bernardo Magnini, Alberto Lavelli, Fabien Gandon To cite this version: Elena Cabrio, Julien

More information

A direct approach to sequential diagnosis of high cardinality faults in knowledge-bases

A direct approach to sequential diagnosis of high cardinality faults in knowledge-bases A direct approach to sequential diagnosis of high cardinality faults in knowledge-bases Kostyantyn Shchekotykhin and Gerhard Friedrich and Patrick Rodler and Philipp Fleiss 1 1 Alpen-Adria Universität,

More information

Simple library thesaurus alignment with SILAS

Simple library thesaurus alignment with SILAS Simple library thesaurus alignment with SILAS Roelant Ossewaarde 1 Linguistics Department University at Buffalo, the State University of New York rao3@buffalo.edu Abstract. This paper describes a system

More information

Probabilistic Information Integration and Retrieval in the Semantic Web

Probabilistic Information Integration and Retrieval in the Semantic Web Probabilistic Information Integration and Retrieval in the Semantic Web Livia Predoiu Institute of Computer Science, University of Mannheim, A5,6, 68159 Mannheim, Germany livia@informatik.uni-mannheim.de

More information

jcel: A Modular Rule-based Reasoner

jcel: A Modular Rule-based Reasoner jcel: A Modular Rule-based Reasoner Julian Mendez Theoretical Computer Science, TU Dresden, Germany mendez@tcs.inf.tu-dresden.de Abstract. jcel is a reasoner for the description logic EL + that uses a

More information

Semantic Web. Ontology Alignment. Morteza Amini. Sharif University of Technology Fall 95-96

Semantic Web. Ontology Alignment. Morteza Amini. Sharif University of Technology Fall 95-96 ه عا ی Semantic Web Ontology Alignment Morteza Amini Sharif University of Technology Fall 95-96 Outline The Problem of Ontologies Ontology Heterogeneity Ontology Alignment Overall Process Similarity (Matching)

More information

Criteria and Evaluation for Ontology Modularization Techniques

Criteria and Evaluation for Ontology Modularization Techniques 3 Criteria and Evaluation for Ontology Modularization Techniques Mathieu d Aquin 1, Anne Schlicht 2, Heiner Stuckenschmidt 2, and Marta Sabou 1 1 Knowledge Media Institute (KMi) The Open University, Milton

More information

Computing a Gain Chart. Comparing the computation time of data mining tools on a large dataset under Linux.

Computing a Gain Chart. Comparing the computation time of data mining tools on a large dataset under Linux. 1 Introduction Computing a Gain Chart. Comparing the computation time of data mining tools on a large dataset under Linux. The gain chart is an alternative to confusion matrix for the evaluation of a classifier.

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

Applying the KISS Principle for the CLEF- IP 2010 Prior Art Candidate Patent Search Task

Applying the KISS Principle for the CLEF- IP 2010 Prior Art Candidate Patent Search Task Applying the KISS Principle for the CLEF- IP 2010 Prior Art Candidate Patent Search Task Walid Magdy, Gareth J.F. Jones Centre for Next Generation Localisation School of Computing Dublin City University,

More information

Towards an Automatic Parameterization of Ontology Matching Tools based on Example Mappings

Towards an Automatic Parameterization of Ontology Matching Tools based on Example Mappings Towards an Automatic Parameterization of Ontology Matching Tools based on Example Mappings Dominique Ritze 1 and Heiko Paulheim 2 1 Mannheim University Library dominique.ritze@bib.uni-mannheim.de 2 Technische

More information

An Approach to Evaluate and Enhance the Retrieval of Web Services Based on Semantic Information

An Approach to Evaluate and Enhance the Retrieval of Web Services Based on Semantic Information An Approach to Evaluate and Enhance the Retrieval of Web Services Based on Semantic Information Stefan Schulte Multimedia Communications Lab (KOM) Technische Universität Darmstadt, Germany schulte@kom.tu-darmstadt.de

More information

Verification and Validation of X-Sim: A Trace-Based Simulator

Verification and Validation of X-Sim: A Trace-Based Simulator http://www.cse.wustl.edu/~jain/cse567-06/ftp/xsim/index.html 1 of 11 Verification and Validation of X-Sim: A Trace-Based Simulator Saurabh Gayen, sg3@wustl.edu Abstract X-Sim is a trace-based simulator

More information

Learning Approach for Domain-Independent Linked Data Instance Matching

Learning Approach for Domain-Independent Linked Data Instance Matching Learning Approach for Domain-Independent Linked Data Instance Matching Khai Nguyen University of Science Ho Chi Minh, Vietnam nhkhai@fit.hcmus.edu.vn Ryutaro Ichise National Institute of Informatics Tokyo,

More information

Bibster A Semantics-Based Bibliographic Peer-to-Peer System

Bibster A Semantics-Based Bibliographic Peer-to-Peer System Bibster A Semantics-Based Bibliographic Peer-to-Peer System Peter Haase 1, Björn Schnizler 1, Jeen Broekstra 2, Marc Ehrig 1, Frank van Harmelen 2, Maarten Menken 2, Peter Mika 2, Michal Plechawski 3,

More information

Alignment Results of SOBOM for OAEI 2009

Alignment Results of SOBOM for OAEI 2009 Alignment Results of SBM for AEI 2009 Peigang Xu, Haijun Tao, Tianyi Zang, Yadong, Wang School of Computer Science and Technology Harbin Institute of Technology, Harbin, China xpg0312@hotmail.com, hjtao.hit@gmail.com,

More information

A method for recommending ontology alignment strategies

A method for recommending ontology alignment strategies A method for recommending ontology alignment strategies He Tan and Patrick Lambrix Department of Computer and Information Science Linköpings universitet, Sweden This is a pre-print version of the article

More information

Chapter X Security Performance Metrics

Chapter X Security Performance Metrics Chapter X Security Performance Metrics Page 1 of 10 Chapter X Security Performance Metrics Background For many years now, NERC and the electricity industry have taken actions to address cyber and physical

More information

Interactive Campaign Planning for Marketing Analysts

Interactive Campaign Planning for Marketing Analysts Interactive Campaign Planning for Marketing Analysts Fan Du University of Maryland College Park, MD, USA fan@cs.umd.edu Sana Malik Adobe Research San Jose, CA, USA sana.malik@adobe.com Eunyee Koh Adobe

More information

An Architecture for Semantic Enterprise Application Integration Standards

An Architecture for Semantic Enterprise Application Integration Standards An Architecture for Semantic Enterprise Application Integration Standards Nenad Anicic 1, 2, Nenad Ivezic 1, Albert Jones 1 1 National Institute of Standards and Technology, 100 Bureau Drive Gaithersburg,

More information

The AgreementMakerLight Ontology Matching System

The AgreementMakerLight Ontology Matching System The AgreementMakerLight Ontology Matching System Daniel Faria 1,CatiaPesquita 1, Emanuel Santos 1, Matteo Palmonari 2,IsabelF.Cruz 3,andFranciscoM.Couto 1 1 LASIGE, Department of Informatics, Faculty of

More information

Recent Design Optimization Methods for Energy- Efficient Electric Motors and Derived Requirements for a New Improved Method Part 3

Recent Design Optimization Methods for Energy- Efficient Electric Motors and Derived Requirements for a New Improved Method Part 3 Proceedings Recent Design Optimization Methods for Energy- Efficient Electric Motors and Derived Requirements for a New Improved Method Part 3 Johannes Schmelcher 1, *, Max Kleine Büning 2, Kai Kreisköther

More information

Web-based Ontology Alignment with the GeneTegra Alignment tool

Web-based Ontology Alignment with the GeneTegra Alignment tool Web-based Ontology Alignment with the GeneTegra Alignment tool Nemanja Stojanovic, Ray M. Bradley, Sean Wilkinson, Mansur Kabuka, and E. Patrick Shironoshita INFOTECH Soft, Inc. 1201 Brickell Avenue, Suite

More information