Human-Generated learning object metadata

Andrew Brasher, Patrick McAndrew
UserLab, Institute of Educational Technology, Open University, UK
{a.j.brasher, p.mcandrew}@open.ac.uk

Abstract. This paper examines why a production process for e-learning resources needs to include human-generated metadata, and considers how users will exploit this metadata. It identifies situations in which human production of metadata is unavoidable, and examines fundamental problems of human metadata generation such as motivation and shared understanding. It proposes and discusses some methods to exploit motivational factors endemic to communities in an attempt to ensure good-quality human-generated metadata, and identifies how ontological constructs can support the exploitation of such metadata. The relevance of these methods to the semantic web in general is discussed.

1 Introduction

In common with metadata schemas from many other domains, learning object metadata proposals such as the IEEE Standard for Learning Object Metadata [1] are specified in terms of descriptors related to particular aspects of the resource being described. In any situation in which metadata is to be generated, every such descriptor can be considered to draw on one of two distinct categories of sources:

1. Intrinsic sources - sources contained within the resource itself, i.e. a necessary part of the resource. Examples include the format and the title of a resource.
2. Extrinsic sources - sources not contained within the resource itself. Examples include personal or organisational views about the expected use of the resource (e.g. the IEEE LOM elements educational level and difficulty [1]).

No matter which metadata schema one applies, and no matter which domain of knowledge one considers, these two categories of sources exist. The relative usefulness of each category will depend on the particular application under consideration. Software applications exist which can take some forms of intrinsic sources as inputs and automatically determine relevant metadata descriptors from them; see, for example, the software tools listed by UKOLN (http://www.ukoln.ac.uk/metadata/software-tools/). Sources such as the file size, file format, and location (e.g. the URL) of a resource fall into this automatically determinable category.
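By way of illustration, a minimal Python sketch of this distinction might look as follows; the descriptor names and the example resource are hypothetical, and only the intrinsic descriptors are filled in automatically:

```python
import mimetypes
import os

def intrinsic_descriptors(path: str) -> dict:
    """Descriptors determinable from the resource itself."""
    mime_type, _ = mimetypes.guess_type(path)
    return {
        "location": os.path.abspath(path),  # e.g. a URL or file path
        "size": os.path.getsize(path),      # in bytes
        "format": mime_type or "unknown",   # e.g. "text/html"
    }

def extrinsic_descriptors() -> dict:
    """Descriptors of expected use: these require human input."""
    return {
        "educational_level": None,       # to be supplied by an author or teacher
        "difficulty": None,
        "typical_learning_time": None,
    }

if __name__ == "__main__":
    # __file__ stands in for an actual e-learning resource.
    record = {**intrinsic_descriptors(__file__), **extrinsic_descriptors()}
    print(record)
```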

In addition, resources whose semantic content is primarily textual contain many intrinsic sources from which descriptors related to the content can be generated automatically, through the use of suitable software performing e.g. linguistic analysis on this intrinsic source material [2], [3], [4], and through techniques such as latent semantic indexing which exploit the properties of collections of material [5]. Many web pages and word processor documents belong to this primarily textual group. There is also technology available to extract metadata from printed textual material [6]. However, for other sorts of resources, i.e. those whose semantic content is not primarily textual (e.g. sound files, movie files, and multimedia resources), it is more difficult to extract descriptors automatically [7] or to obviate the need for descriptors by using other methods (see e.g. papers on visual query of image and video content in [8]). Furthermore, no matter which modality the semantic content of e-learning resources is encoded in (be it text, sound, or video), there are many useful characteristics of e-learning resources which are not automatically determinable, but require human intervention because they reside in extrinsic sources and are characteristics of the expected or actual use of a resource [9]. There are thus two types of sources which require human intervention to create metadata descriptors: (i) extrinsic sources, and (ii) intrinsic sources within non-textual resources.

2 Relevance to Learning Object Metadata

Of the 11 descriptors in the IEEE LOM's educational category, 6 are defined as characteristics of expected use, i.e. must be generated by humans ('Intended User Role', 'Context', 'Typical Age Range', 'Difficulty', 'Typical Learning Time', 'Language'), and 5 will require human intervention if the resource is non-textual ('Interactivity Type', 'Learning Resource Type', 'Interactivity Level', 'Semantic Density', 'Description').

The purpose of metadata such as this is to facilitate discovery and (re)use of resources, by enabling computer systems to support users in identifying e-learning resources relevant to their needs. In general, a system that exploits metadata to this end will involve people, computer systems, the resources that the metadata describes, and the metadata itself; hereafter we refer to such a system as a Metadata Exploitation System. The computer systems within a Metadata Exploitation System will invariably include search and retrieval algorithms, and the complexity of design and the performance of these algorithms will be influenced by the expected quality of the metadata. One crucial aspect of quality is accuracy, and the accuracy of any metadata descriptor will depend on the system which creates it. This means that to optimise the performance of the Metadata Exploitation System one must optimise the performance of the system which creates the metadata, i.e. it is necessary to consider two systems:

1. a system for exploiting the metadata: the Metadata Exploitation System;
2. a system for creating the metadata: the Metadata Creation System;

and how these systems will interoperate. Both the exploitation and creation systems will involve people, computer systems, the resources that the metadata describes, and the metadata itself.

Experience at the Open University and elsewhere [10], [11] has shown that human-produced metadata is often neither complete nor consistent. If the quality of metadata is assumed to be related to its utility in retrieval systems, then the factors which influence the quality of human-produced metadata can be considered to fall into three categories:

- motivation of the producers: how is the metadata to be useful to those who are producing it?
- accuracy: does the metadata describe the resource fully and as intended by the designers of the system(s) which will exploit the metadata?
- consistency: can the metadata produced by different people be interpreted in the same way by the systems which will use it?

For the first of these categories, motivation, it is clear that expectations of increased efficiency in the production of resources (e.g. via reuse), or of increased value of the produced resources (e.g. via personalisation), are providing an incentive for organisations to include the production of metadata within their workflow for producing e-learning resources. For some organisations the existence of standards (e.g. [1]) and of systems for handling stores of resources (e.g. [12]) provides an additional incentive. However, for individuals within the production system the motivational factors can be more difficult to determine: if the benefits lie only in reuse or in the subsequent finding of objects by a third party, the value perceived by the producer will be low at the time the object is produced.

Solutions to accuracy and consistency within an organisation can take a centralised approach, for example by focusing the work of adding metadata on a few individuals, such as information science specialists who bring cataloguing experience. In such systems, where metadata is seen as an additional aspect often added at the end of a workflow, there is a risk of adopting simplified schemes and missing opportunities to exploit the metadata descriptions in the initial production of the materials (i.e. before the materials are made available to students or teachers). Characterising educational material also requires specialist knowledge to determine expected use. The experts in expected use are the users (e.g. teachers, tutors) and the authors of the material itself, so this community of practice needs to be engaged both to create and to exploit these descriptors properly. Individual motivation needs to be encouraged by identifying advantages during creation, and community motivation empowered by developing effective sharing of the ontologies and vocabularies used inside the community.

3 Mechanisms to exploit motivational factors

In this section we outline how an approach based on structured vocabularies may be used in the design of Metadata Creation Systems, and then describe an approach to considering motivational factors. Finally, we examine how a method built on a formal description of two communities' shared understanding may overcome a limitation of the previous approach, namely that it is confined to a single community.

A community of practice (e.g. staff working in a particular faculty at a particular university) can use the educational category of the IEEE LOM to describe the educational resources they produce accurately and consistently, given suitable tools (e.g. as described in [13]). Structured vocabularies have been shown to improve consistency amongst metadata creators; in particular, Kabel et al. report the positive effect of vocabulary structure on consistency [14]. A vocabulary structured using, for example, the IMS Vocabulary Definition Exchange (VDEX) specification [15] can provide more semantic information than a flat list of terms, thus facilitating accuracy. Furthermore, we propose that an approach to rapid prototyping of Metadata Creation Systems can be based on the creation and iterative development of structured vocabularies. For example, vocabularies structured according to the IMS VDEX specification can be exploited in the prototyping process via transformations which facilitate the evaluation and iterative development of:

1. the vocabulary terms and the vocabulary structure;
2. help information for metadata creators.

The transformations we are referring to are typically XSL transformations [16] which operate on a structured vocabulary to produce e.g. XHTML user interface components that enable the vocabulary to be evaluated by the relevant user group.
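As an illustration, the following minimal sketch performs such a transformation in Python rather than XSLT; the vocabulary document is a simplified, VDEX-like structure whose element names are invented for the example, not taken from the specification:

```python
import xml.etree.ElementTree as ET

# Simplified, VDEX-like vocabulary for a 'Difficulty' element
# (illustrative structure only, not the IMS VDEX schema).
VOCABULARY = """
<vocabulary identifier="difficulty">
  <term id="very-easy"><caption>very easy</caption></term>
  <term id="easy"><caption>easy</caption></term>
  <term id="medium"><caption>medium</caption></term>
  <term id="difficult"><caption>difficult</caption></term>
</vocabulary>
"""

def vocabulary_to_xhtml_select(vocab_xml: str) -> str:
    """Render a structured vocabulary as an XHTML <select> element so that
    the terms can be evaluated by the relevant user group."""
    root = ET.fromstring(vocab_xml)
    select = ET.Element("select", {"name": root.get("identifier", "vocabulary")})
    for term in root.findall("term"):
        option = ET.SubElement(select, "option", {"value": term.get("id", "")})
        option.text = term.findtext("caption", default="")
    return ET.tostring(select, encoding="unicode")

if __name__ == "__main__":
    print(vocabulary_to_xhtml_select(VOCABULARY))
```

Because the user interface component is generated from the vocabulary itself, changes made to the terms or to the structure during evaluation propagate directly into the next prototype.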

However, we argue that the motivation of the individuals who are called on to create metadata must also be considered, and that better results should be produced by such communities if the descriptors that individuals within the community are asked to produce are also of use to those individuals themselves. This implies the need to analyse the tasks normally carried out by individuals, e.g. using methods such as the socio-cognitive engineering method described by Sharples et al. [17], to identify how individuals themselves could exploit descriptors intended for end users. Thus the vocabularies and structures which are developed (or identified) for use within a Metadata Exploitation System should form the basis from which the development of vocabularies for the Metadata Creation System begins, and proceeds via the prototyping process described above.

However, such a task analysis may also yield alternative generation mechanisms. To demonstrate this we consider an example based on existing learning material: a collection of learning objects created for the UKeU masters-level course 'Learning in the connected economy'. This course is part of a Postgraduate Certificate in Online and Distance Education which was offered by the UKeU (see [18] for an overview of the course, and [19] for a description of the learning object structure of the course). In the case of these learning objects, the course team that created the material had a primary typical target audience in mind (note: we use the term 'course team' to refer to the team of authors and editors responsible for creating material for a particular course), and characteristics of this audience were described on the course web site. The details of this description need not concern us here, but they included requirements such as a first degree and proficiency in English.

In developing the learning objects for this course, one of the pedagogic requirements the course team had to comply with was that students should be given an indication of the estimated study time that each learning object should take to complete. This estimated study time figure was generated by the content authors in accordance with the guidance in the IEEE standard for the 'Typical Learning Time' metadata element: "Approximate or typical time it takes to work with or through this learning object for the typical intended target audience." [1]. Thus the estimated study time information required could be generated automatically from learning object metadata, provided this metadata included the correct figure within the 'Typical Learning Time' element. This provides the motivation for authors to enter a reasonable figure for estimated study time.

With respect to the 'Difficulty' element, the explanatory note in the IEEE standard states: "How hard it is to work with or through this learning object for the typical intended target audience." [1], and with respect to both the 'Typical Learning Time' and 'Difficulty' elements the standard notes: "The typical target audience can be characterized by data elements 5.6:Educational.Context and 5.7:Educational.TypicalAgeRange." [1].

Next, we assume that within this context (i.e. the educational context characterised by the typical target audience referred to previously) 'Typical Learning Time' is related to difficulty for this typical target audience. The exact nature of the relationship is not important; we merely assume that as 'Typical Learning Time' increases, so does difficulty. Now assume that the authors of the course think that the range of levels of difficulty for this course runs from 'very easy' to 'difficult', i.e. they agree, as a community, that this range of descriptors of difficulty is apt. Then, by analysing the 'Typical Learning Time' information given to students (and entered into the metadata) for the complete course, this community of authors proposes a categorisation based on 'Typical Learning Time'. This categorisation enables 'Difficulty' metadata to be generated automatically for this particular educational context, and hence inserted into the relevant metadata record.
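A minimal sketch of such a categorisation is given below; the thresholds and scale shown are invented for illustration, and would in practice be agreed by the community of authors after analysing the 'Typical Learning Time' values recorded across the whole course:

```python
# Community-agreed range of difficulty descriptors for this educational context.
DIFFICULTY_SCALE = ["very easy", "easy", "medium", "difficult"]

# Illustrative thresholds (minutes of typical learning time); in practice these
# would be derived from the distribution of values across the whole course.
THRESHOLDS_MINUTES = [30, 60, 120]

def difficulty_from_learning_time(typical_learning_time_minutes: float) -> str:
    """Map a 'Typical Learning Time' value to a 'Difficulty' descriptor,
    valid only for the educational context in which the scale was agreed."""
    for threshold, label in zip(THRESHOLDS_MINUTES, DIFFICULTY_SCALE):
        if typical_learning_time_minutes <= threshold:
            return label
    return DIFFICULTY_SCALE[-1]

# Example: a learning object with a typical learning time of 90 minutes
# would be described as 'medium' in this context.
assert difficulty_from_learning_time(90) == "medium"
```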

This metadata is sufficient to be useful within this particular educational context. It enables learning objects to be sorted and compared in terms of difficulty (or 'Typical Learning Time') for this context; knowledge of the context is implicit in the community of authors' decision about the range of applicable descriptors of difficulty, and in their descriptions of the difficulty of individual learning objects. However, difficulty metadata generated by the mechanism described so far is only valid for one educational context, and can only be correctly interpreted by people and systems that are aware of this fact. With the resources described so far (i.e. the vocabulary and metadata resources) it is not possible for an algorithm to make useful comparisons between the difficulty of a learning object on a particular topic from this postgraduate course and a learning object on the same topic from an undergraduate course in another subject area.

To make such comparisons possible, extensions to the metadata and to the creation and exploitation systems are necessary. For example, the IEEE LOM permits any number of 'Educational' elements, hence a Metadata Creation System could implement a difficulty element for every educational context perceived to be of interest, and a Metadata Exploitation System could include an algorithm to perform the necessary comparisons. However, there are problems with this approach, not least the burden of creating the additional metadata (note that although the difficulty metadata can be generated automatically, the 'Typical Learning Time' metadata is still a prerequisite, i.e. for every context someone would have to ascribe a value for it in order to allow the difficulty value to be generated).

We now propose a method to reduce the burden placed upon communities of authors, yet still enable the difficulty of learning objects to be compared across contexts. Assume the course team responsible for creating material for context1 is team1, and the team responsible for creating material for context2 is team2. These course teams can meet, discuss and agree how the difficulty of the learning objects they have created for their own context will be perceived by students in the other context. For example, the teams may agree that learning objects created for context1 (e.g. postgraduate level) will be perceived as more difficult by students in context2 (e.g. undergraduate level) than the difficulty descriptors applied by team1 for context1 appear to indicate. This agreement establishes a relationship between descriptions of difficulty in one context and perceptions of difficulty in the other. We propose that in general (i.e. for any pair of contexts, whatever the exact nature of the contexts in question), the gamut of useful descriptors of this contextual relationship is:

- assignments of difficulty made in context1 are perceived as more difficult than assignments of difficulty made in context2;
- assignments of difficulty made in context1 are perceived as less difficult than assignments of difficulty made in context2;
- assignments of difficulty made in context1 are perceived as as difficult as assignments of difficulty made in context2.

(Note that in this case 'assignment' should be interpreted as the action of assigning, not as a task allocated to somebody as part of a course of study.) Provided there is metadata available which describes the difficulty of learning objects for the two educational contexts context1 and context2 (perhaps generated automatically by the mechanism described previously), what is then required is a mechanism for encoding the statements describing the relationships presented above so that they can be interpreted by machines and people. We have developed a prototype ontology in OWL [20] which enables machine interpretation of such contextual relationships. The classes in this ontology are described in Table 1.

Table 1. Description of the classes of concepts in the ontology

Difficulty class: Individuals of class Difficulty represent a level of difficulty that can be assigned to individuals of class LO. A particular individual (i.e. a level of difficulty) can be related to other individuals via the slots 'less_difficult_than' and 'more_difficult_than'.

Assignment class: The Assignment class describes assignments of individuals of Difficulty to individuals of LO by a particular actor in a particular Context. For example, the individual of Assignment identified as 'Henry Hall' should be interpreted as: "Henry Hall says that learning object individual lo2 is 'difficult' for students in the Physics context."

Context class: Individuals of the class Context represent the context (e.g. educational level and study skills) to which individuals of class Assignment may be assigned.

ContextRelationship class: The ContextRelationship class describes relationships between assignments of difficulty in different contexts. An example individual is R1: "For students in the Maths context, the assignments of difficulty made for students in the Physics context are less difficult."

LO class: Individuals of the class LO represent learning objects, i.e. they are references to metadata records which describe the relevant learning object.

All the individuals necessary for such an ontology to be exploited may be generated automatically from metadata records created for a particular context, except the ContextRelationship individuals. It is envisaged that these could be created, for example, through discussions and agreements between the managers representing the course teams of the contexts in question.
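To make the intended semantics concrete, the sketch below models the classes of Table 1 and a cross-context comparison in plain Python; the representation is illustrative only, since the prototype itself is expressed in OWL:

```python
from dataclasses import dataclass
from enum import Enum

class Relation(Enum):
    MORE_DIFFICULT = "more difficult"
    LESS_DIFFICULT = "less difficult"
    AS_DIFFICULT = "as difficult"

@dataclass(frozen=True)
class Context:             # e.g. educational level and study skills
    name: str

@dataclass(frozen=True)
class LO:                  # reference to a learning object's metadata record
    metadata_record: str

@dataclass(frozen=True)
class Assignment:          # an actor assigns a difficulty to an LO in a Context
    actor: str
    lo: LO
    difficulty: str        # a term from the context's agreed difficulty scale
    context: Context

@dataclass(frozen=True)
class ContextRelationship:
    # "Assignments of difficulty made in 'source' are perceived as <relation>
    #  by students in 'target'."
    source: Context
    target: Context
    relation: Relation

def perceived_in(assignment: Assignment, target: Context,
                 relationships: list[ContextRelationship]) -> str:
    """Interpret an assignment of difficulty made in one context from the
    point of view of students in another context."""
    for r in relationships:
        if r.source == assignment.context and r.target == target:
            return f"{assignment.difficulty} (perceived as {r.relation.value})"
    return f"{assignment.difficulty} (no known relationship to {target.name})"

# Example mirroring R1 in Table 1: for students in the Maths context, the
# assignments of difficulty made in the Physics context are less difficult.
maths, physics = Context("Maths"), Context("Physics")
r1 = ContextRelationship(source=physics, target=maths,
                         relation=Relation.LESS_DIFFICULT)
henry = Assignment("Henry Hall", LO("lo2"), "difficult", physics)
print(perceived_in(henry, maths, [r1]))
```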

4 Conclusions

Systems (e.g. search and retrieval systems) which seek to exploit metadata descriptions that have been created with more than one expected use in mind must have an understanding of the semantics of the descriptions of expected use, and of the semantic relationships between the values of these descriptors. In most current systems this understanding exists outside the computer system, in the shared knowledge and understanding of the communities that both create and exploit the values via the computer system. For computer systems to do this without human intervention, there must be a machine-interpretable description of the relationships between the contexts of expected use. Formal languages which enable such machine-interpretable descriptions to be realised can help address this, e.g. by implementing representations of communities' shared understanding of the relevant aspects using the Web Ontology Language, OWL [20]. The use of OWL was not necessitated by the requirements of the problem we have addressed here; other languages could have been used. However, we realise that organisational motivation is needed to create and/or utilise an ontology such as the one described. One factor in the choice of OWL to implement the relatively simple constraints and relationships necessary for this ontology (relative to what is possible with OWL) is the expected availability of tools that can reason about them [21], which should enable widespread exploitation of context relationship instances implemented in OWL.

Finally, we remark that the situations which necessitate human-generated metadata for educational resources (i.e. extrinsic sources, and intrinsic sources within non-textual resources) arise in just the same way for resources from all other domains. Thus any semantic web application which relies on metadata created from these sources could make use of the methods described in this paper, i.e. consider and exploit the motivational and community aspects of human metadata generation.

References

1. IEEE, IEEE Standard for Learning Object Metadata, IEEE Std 1484.12.1-2002. The Institute of Electrical and Electronics Engineers, Inc., 2002, p. i-32.
2. Sebastiani, F., Machine Learning in Automated Text Categorization. ACM Computing Surveys, 2002. 34(1).
3. Lam, W., M. Ruiz, and P. Srinivasan, Automatic Text Categorization and Its Application to Text Retrieval. IEEE Transactions on Knowledge and Data Engineering, 1999. 11(6): p. 865-879.
4. Liddy, E.D., et al., Automatic metadata generation & evaluation. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 2002. Tampere, Finland: ACM Press.
5. Deerwester, S., et al., Indexing by latent semantic analysis. Journal of the Society for Information Science, 1990. 41(6): p. 391-407.
6. METAe - Meta Data Engine. The METAe project group, 2002. http://metae.aib.uni-linz.ac.at/.
7. Brunelli, R., O. Mich, and C.M. Modena, A Survey on the Automatic Indexing of Video Data. Journal of Visual Communication and Image Representation, 1999. 10(2): p. 78-112.
8. Ngan, K.N., et al., eds. Special issue on segmentation, description and retrieval of video content. 1998.
9. Marshall, C., Making Metadata: a study of metadata creation for a mixed physical-digital collection. In ACM Digital Libraries '98 Conference. 1998. Pittsburgh, PA.
10. Currier, S., et al., Quality Assurance for Digital Learning Object Repositories: Issues for the Metadata Creation Process. ALT-J, Research in Learning Technology, 2004. 12(1): p. 6-20.
11. Chan, L.M., Inter-Indexer Consistency in Subject Cataloging. Information Technology and Libraries, 1989. 8(4): p. 349-358.
12. GEM, The Gateway to Educational Materials. 2003. http://www.thegateway.org/.
13. Brasher, A. and P. McAndrew, Metadata vocabularies for describing learning objects: implementation and exploitation issues. Learning Technology, 2003. 5(1). http://lttf.ieee.org/learn_tech/issues/january2003/.
14. Kabel, S., R. de Hoog, and B.J. Wielinga, Consistency in Indexing Learning Objects: an Empirical Investigation. In ED-Media 2003. 2003. Honolulu, Hawaii, USA.
15. IMS, IMS Vocabulary Definition Exchange. 2004. http://www.imsglobal.org/vdex/.
16. Clark, J., XSL Transformations (XSLT) Version 1.0. W3C, 1999. http://www.w3.org/tr/xslt.

17. Sharples, M., et al., Socio-Cognitive Engineering: A Methodology for the Design of Human-Centred Technology. European Journal of Operational Research, 2002. 136(2): p. 310-323.
18. UKeU, Postgraduate Certificate in Online and Distance Education. 2003/4. http://www.ukeu.com/courses/connectedeconomy/courses_connectedeconomy_programme.php?site=.
19. Weller, M., C. Pegler, and R. Mason, Working with learning objects - some pedagogical suggestions. 2003. http://iet.open.ac.uk/pp/m.j.weller/pub/altc.doc.
20. McGuinness, D.L. and F. van Harmelen, OWL Web Ontology Language Overview. W3C, 2003. http://www.w3.org/tr/owl-features/.
21. Abecker, A. and R. Tellmann, Analysis of Interaction between Semantic Web Languages, P2P Architectures, and Agents. SWWS (Semantic Web Enabled Web Services) project, IST-2002-37134, 2003. http://swws.semanticweb.org/public_doc/d1.3.pdf.