Integrated Knowledge Workbench


Integrated Project
Integrated Knowledge Workbench
Deliverable D1.3, Version 1.1
January 14, 2009
Dissemination level: PU
Nature: R/P
Lead contractor: FZI Forschungszentrum Informatik
Duration of project: 36 months

Authors: Heiko Haller, FZI; Max Völkel, FZI; Andreas Abecker, FZI; Brian Davis, NUIG; Laura Drăgan, NUIG; Simon Scerri, NUIG; Alexander Schutz, NUIG; Pradeep Varma, NUIG; Henrik Edlund, KTH; Kristina Groth, KTH; Pär Lannerö, KTH; Sinna Lindquist, KTH; Alexander Troussov, IBM; Mikhail Sogrin, IBM; Alexander Polonsky, COG

Mentors: Ansgar Bernardi, DFKI; Mehdi Jazayeri, USI

Project Co-ordinator: Dr. Ansgar Bernardi, German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Str., Kaiserslautern, Germany

Partners: DEUTSCHES FORSCHUNGSZENTRUM F. KUENSTLICHE INTELLIGENZ GMBH; IBM IRELAND PRODUCT DISTRIBUTION LIMITED; SAP AG; HEWLETT PACKARD GALWAY LTD; THALES S.A.; PRC GROUP - THE MANAGEMENT HOUSE S.A.; EDGE-IT S.A.R.L; COGNIUM SYSTEMS S.A.; NATIONAL UNIVERSITY OF IRELAND, GALWAY; ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE; FORSCHUNGSZENTRUM INFORMATIK AN DER UNIVERSITAET KARLSRUHE; UNIVERSITAET HANNOVER; INSTITUTE OF COMMUNICATION AND COMPUTER SYSTEMS; KUNGLIGA TEKNISKA HOEGSKOLAN; UNIVERSITA DELLA SVIZZERA ITALIANA; IRION MANAGEMENT CONSULTING GMBH

Copyright: Nepomuk Consortium 2006. Copyright on template: Irion Management Consulting GmbH 2006.

Versions

- Start of first draft, by Alexander Schutz
- Minor cosmetic change to deliverable nature, by Brian Davis
- Some new contributions merged in and some placeholders added; version sent to the mentors, by Heiko Haller
- 0.4: Working version under construction, taking into account mentors' feedback
- 0.5: Working version under construction; added final Speech Acts section to NL Tools
- Pre-final version; content complete, last corrections being made
- Final version, submitted to project coordinator
- Final version; last missing content included, submitted to project coordinator

Explanation of abbreviations on front page.
Nature: R: Report; P: Prototype; R/P: Report and Prototype; O: Other.
Dissemination level: PU: Public; PP: Restricted to other FP6 participants; RE: Restricted to specified group; CO: Confidential, only for Nepomuk partners.

Contents

1 Executive Summary / Introduction
2 Integrated Knowledge Workbench
  2.1 NepomukSimple
      How to use NepomukSimple
      Background
      Claudia goes to Belfast
      Functionalities
      Galaxy recommendations in NepomukSimple
      Evaluation
      Methods
      Expert evaluation
      Usability evaluation with users
      New area of use and positive aspects as presented by the users
      Lessons Learned
      Future Work: Recommendation Functions
      Future Work: Customization & Sustainability
      Future Work: Collaborative Spread of Activation
  2.2 Semantic Email and Semanta
3 Stand-Alone Applications
  3.1 imapping Client
      QuiKey
      Lessons Learnt and Future Work
  3.2 Nepomuk KDE
      SemNotes - The Semantic Note-taking Tool
  3.3 Web-Application BounceIt
      Functionalities
      Architecture
      Evaluation
      Exploitation and dissemination plans
      Conclusion
  3.4 Shared Information Space SIS (Prototype)
      Background
      The Final Version
      Evaluations
      New area of use and positive aspects as presented by the users
      Lessons learned
      Test for final improvements
4 Natural Language Tools and Other Enabling Technologies
  4.1 StrucRec Text Analysis and Graph Mining
      StrucRec Functionality
      Semantic Text Analysis
      Related Item Recommendation
      Keyphrase Extraction
      Implementation
      4.2.2 Deployment
      Evaluation
  ROA - Roundtrip Ontology Authoring
      Introduction
      Design and Implementation
      Evaluation
      Related work
      Conclusion & Discussion
      Integration in SemNotes and Nepomuk-KDE
  4.4 Speechact Detection
      Background
      Conceptual Classification
      Implementation
      Evaluation
      Related Work
      Conclusion
      Speech Act Detection in Nepomuk
Conceptual Data Structures API
      Architecture and Status
      Integration with NEPOMUK Technologies
Conclusions
References

1 Executive Summary / Introduction

The Social Semantic Desktop project NEPOMUK, which has run for three years, has now reached its end. This deliverable reports the achievements of the third and final year of NEPOMUK's work package 1, "Knowledge Articulation and Visualisation". Previous work in this work package has been described in the deliverables D1.1 "Semantic Wiki Prototypes" (Kotelnikov, Polonsky, Kiesel, Völkel, Haller, Sogrin, Lannerö & Davis 2006), D1.2 "Conceptual Data Structure Tools" (Völkel, Haller, Bolinder, Davis, Edlund, Groth, Gudjonsdottir, Kotelnikov, Lannerö, Lundquist, Sogrin, Sundblad & Westerlund 2008) and D1.4 "Design methods and work process" (Lindquist, Gudjonsdottir, Westerlund, Groth, Sundblad & Bogdan 2008), and in many additional publications referred to in these deliverables. The multitude of tools described in D1.2 "Conceptual Data Structure Tools" has been integrated more tightly with one another and with the NEPOMUK back-end services: some have been integrated into the rich and monolithic NEPOMUK application framework PSEW, and some have fused into small stand-alone applications that all connect to the NEPOMUK back-end to integrate and share their contents with other NEPOMUK components.

PSEW/NepomukSimple (Sec. 2). NepomukSimple is a so-called perspective of NEPOMUK PSEW, combining a selection of well-integrated tools and services in one holistically designed appearance. It allows users to organise, annotate, search and browse their desktop resources. It is based on the Pile concept, used to manage groups of resources (not only files, but also address book entries, calendar entries, concepts and Web links). Some information is displayed in specialised views, including a timeline and a map. NepomukSimple uses background services like IBM Galaxy, which provides recommendations for related items. NepomukSimple has also been integrated with some extensions to widespread third-party software such as Mozilla Firefox (Web browser) and Thunderbird (e-mail client). A more sophisticated tool also backed by PSEW is Semanta, which enhances plain e-mail with semantic information.

Visual Knowledge Workbench / imapping Client (Sec. 3.1). The imapping client is now a stand-alone tool for structuring ideas and concepts freely, in a visual, graph-based environment, without being constrained by a formal model. It uses CDS (Conceptual Data Structures) as a flexible, adaptable data model. imapping combines several established visual knowledge modelling paradigms and bridges the gap between informal brainstorming and note-taking applications and more formal ontology-based applications like PSEW. The imapping client now includes QuiKey, an advanced lightweight tool that can be compared to a semantic command line. While it can be easily installed and used completely stand-alone, it will use a NEPOMUK server if present, to align with other NEPOMUK applications.

SemNotes (Sec. 3.2). SemNotes is a semantic note-taking application integrated into KDE. Different plug-ins are available, e.g. one for text-based ontology authoring.

BounceIt (Sec. 3.3). Given that information on our desktops is rarely purely personal, BounceIt provides a much-needed smooth integration between personal and shared information. BounceIt has a browser-based interface and focuses mainly on data and metadata management (sharing and search). NepomukSimple and the imapping application, on the other hand, are stand-alone systems that focus mostly on sophisticated metadata creation and its management on the local desktop. BounceIt allows the user to share, via the NEPOMUK P2P platform, the information created within other NEPOMUK components like NepomukSimple or the imapping application, as well as to carry out a semantic search across the shared information spaces. Moreover, BounceIt provides an alternative browser-based interface for viewing local information and editing text content entered in NepomukSimple or imapping, using a Wiki model that is compatible with all these systems. This information can then be accessed from the Web using the BounceIt central server, which is a node within NEPOMUK's P2P system.

Shared Information Space (Sec. 3.4). The Shared Information Space (SIS) is an interactive prototype exploring alternative interaction methods for finding shared information.

Natural Language Tools and other Enabling Technologies (Sec. 4). The final section describes a set of background components utilised by the various workbenches, which can be seen as middleware libraries that are not necessarily tied to a particular application. It covers technologies like recommendation functions based on graph mining, key-phrase extraction, controlled-language generation and parsing to enable text-based ontology engineering, speech act detection, and the Conceptual Data Structures (CDS) engine, and reports on the implementation and evaluation details for those tools.

2 Integrated Knowledge Workbench

PSEW is a NEPOMUK platform made for building integrated knowledge workbenches. It is based on the Eclipse Rich Client Platform. NEPOMUK PSEW integrates many different NEPOMUK tools and views, organised in different so-called perspectives. It also contains all NEPOMUK core services, like storage, search and many more. NEPOMUK PSEW can be downloaded and installed as one monolithic application. PSEW is described in detail in Barth, Mancinelli, Christidis, Sauermann & Laurière (2007) and Lelli, Felix, Gudjonsdottir, Sauermann, Johnston & Pinzger (2008). Instructions about installation and usage are available online.

Because there are many different user interfaces and views in PSEW, some of which require quite some technical knowledge, we created NepomukSimple, a comprehensive user interface combining only a selection of integrated NEPOMUK functionalities. The design of NepomukSimple is directly based on user studies and is realised as an additional perspective to PSEW.

2.1 NepomukSimple

NepomukSimple allows users to organize, annotate, search and browse desktop resources. It is based on the Pile concept, which means that it can be used to manage groups of resources (not only files, but also address book entries, calendar entries, concepts and web links). Some information is displayed in specialized views, including a timeline and a map. NepomukSimple has been integrated with IBM Galaxy, which provides recommendations for related items. NepomukSimple has also been integrated with Mozilla Firefox (Web browser) and Thunderbird (e-mail client) to facilitate the creation of piles. NepomukSimple is a NEPOMUK PSEW perspective.

How to use NepomukSimple

To use NepomukSimple, install and run the NEPOMUK software package, then open the Window->Perspectives menu and select NepomukSimple. Detailed usage instructions are available on the web as well as in NEPOMUK Deliverable 6.7A.

Background

The design of the NepomukSimple prototype is based on experiences from the field studies performed during the beginning of the NEPOMUK project. The strategy for NepomukSimple was to make "the simplest thing that could possibly work". During the design, the Claudia-Trip-to-Belfast scenario, one of many scenarios produced as a result of the field studies, was used.

Claudia goes to Belfast

This scenario was previously published in NEPOMUK Deliverable 10.1 (Grebner, Ong, Riss & Edlund 2006).

On the 27th of February, 2008, Claudia is having a meeting with Klaus, a colleague at SAP Research, and they are planning a meeting in the CID project that they are going to attend in Belfast. There are many things that need to be done before they leave for Belfast in a few weeks. Claudia goes to her office and adds the meeting to her calendar, and a task called "Belfast meeting" with the relevant practical sub-tasks is created. First she needs to get permission for the trip, and then she needs to find fitting travel and accommodation. Besides the practical travel issues, the task list also includes some items connected to the work that needs to be done in the meeting, and Claudia adds some specific tasks to the list. When Claudia has got permission to travel, she proceeds to book train, flight and hotel; the system knows her preferences and recommends only fitting options. Once booked, these are added to her "Belfast meeting" pile, which includes everything related to the meeting, like work and travel documentation. The Belfast meeting pile is also accessible online. When the trip gets closer, the system checks the weather so that she can pack the right things. The system adds recommendations of restaurants and shops according to her profile. She can then create a travel timeline where she sees all the relevant time stamps she needs to make: from leaving the office in order to make it to the train in Karlsruhe, to the tram when she returns home from the trip. Afterwards she gives her colleagues access to her timeline. When Claudia comes back from her trip she needs to make a travel report in order to claim her expenses. This is automatically created according to the travel timeline she created before she left. She takes the receipts that she has on paper and sends them to HR.

Figure 1: The NepomukSimple application.

The application, see Figure 1, is divided into three parts:

View A: The pile. Here the user can put any resource from the desktop into the pile by drag-and-drop. This is derived from the scenario part where Claudia has booked everything and needs some sort of bundle to keep the relevant resources (e-mail, tickets, bookings, etc.) in one context.

View B: The semantics of the content in A is extracted and presented here. This view gives Claudia the opportunity to see vital data from all parts of the Belfast trip pile in one place. She can also make annotations to the pile, and edit the semantics. By drag-and-drop she can create new piles based on one or more of the resources shown. (For example, she can drag the "Belfast"

concept to the pile tabs panel, thus starting a new pile focusing on Belfast-related material.) This is somewhat similar to browsing the web using tabs.

View C: Different views of the resources in A, combined with the annotations in B, enable Claudia to look at her trip in a timeline or map view. This is also the place where recommendations for related items (from Galaxy) can be displayed. The intention with this field is to keep it open for custom-made plug-ins. One further example could be an imapping view.

Functionalities

Important NEPOMUK functionalities were agreed upon and documented at the NEPOMUK workshop in Lugano on the 5th-9th of February 2007, involving most NEPOMUK partners. The following list of NepomukSimple functionalities reuses the ordering and terminology from that workshop. For each functionality, the current status in the prototype implementation (as of December 2008) is indicated within parentheses.

Annotation (Implemented). The B-area shows all annotations for the selected resources in A. Here the user can add, edit and remove annotations.

Search (First version implemented). The search concept is aligned for consistency with the annotation interaction. Searching is performed by writing a free-text query in the text field at the bottom of view A. By clicking on the search button, the user creates a search-query resource annotated with "free-text search" as predicate and the term as object. In the same manner, the user can continue to annotate the search-query resource to narrow down the search.

Query related items (Implemented). Dragging and dropping predicates and objects from the B-field into the A-field lets the user browse his or her semantic desktop.

Resource management (Implemented). Resource management is handled with the concept of piles. From the original design idea of piles: a pile is a collection of items and information relevant for a specific activity, purpose, area, etc. For example, a pile can be a collection of things relevant for a trip: tickets, weather info, flights, passport reminder, visa information, hotel booking, restaurant suggestions, etc. The whole A-field represents the pile. The user can have multiple piles open concurrently with a tab system at the top of the application.

Sorting and Grouping (First version implemented). In order for the B view not to be filled with (normally) uninteresting metadata such as system timestamps, message IDs, etc., the semantic data can be filtered. If, for example, the user is primarily interested in image-related information (exposure values, name of photographer, etc.), then the Image filter is suitable. It will display all metadata where the predicate comes from the NEPOMUK EXIF ontology (image metadata). Similarly, the Music filter will display all metadata where the predicate comes from the ID3 ontology (such as artist, album, etc.). By default, all specialized filters are selected. If the user un-selects some of the filters, the amount of information in B decreases, making it easier to get an overview. A minimal sketch of this kind of namespace-based filtering follows.
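The following sketch makes the filtering idea concrete: a statement is kept only if its predicate URI starts with one of the selected ontology namespaces. All class names and URIs here are illustrative assumptions, not the actual PSEW API.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch: keep only metadata statements whose predicate belongs to one of
// the ontology namespaces the user has left selected.
record RdfStatement(String subject, String predicate, String object) {}

class MetadataFilter {
    private final Set<String> selectedNamespaces;

    MetadataFilter(Set<String> selectedNamespaces) {
        this.selectedNamespaces = selectedNamespaces;
    }

    List<RdfStatement> filter(List<RdfStatement> pileMetadata) {
        return pileMetadata.stream()
                .filter(s -> selectedNamespaces.stream()
                        .anyMatch(ns -> s.predicate().startsWith(ns)))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        String exif = "http://example.org/ontologies/nexif#"; // placeholder namespace URI
        List<RdfStatement> meta = List.of(
                new RdfStatement("photo1.jpg", exif + "exposureTime", "1/60"),
                new RdfStatement("mail42", "http://example.org/sys#messageId", "<42@example.org>"));
        // with only the EXIF filter selected, the system message ID is hidden
        new MetadataFilter(Set.of(exif)).filter(meta).forEach(System.out::println);
    }
}
```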

Right now, the metadata in B can only be sorted by subject, predicate or object. In the future, we would like to offer relevance sorting, as well as any other option that can make the view more effective for the user.

Tailoring/templating (Concept). The user should be able to re-use structures previously stored, to increase work speed and quality. Templating has not yet been added to the prototype implementation.

Application integration (Second version implemented). A plug-in for Firefox as well as one for Thunderbird has been developed; these extract content from the respective application and add it to a NepomukSimple pile. Using the Galaxy integration, a suitable pile for the currently selected e-mail or the currently viewed Web page is presented in a dropdown menu in the plug-in. Therefore, adding an e-mail message or Web page to a NepomukSimple pile requires no more than a single click. More integration with commonly used desktop applications is on the wish-list. For example, it would be nice to be able to automatically extract document outlines and key concepts from Word and PowerPoint documents. Many users encountered in the NEPOMUK case studies work very extensively with these applications.

Recommendations (IBM Galaxy integrated). For each pile, a ranked list of relevant items found on the semantic desktop is available as a tab in the C view. Ranking is based on semantic network proximity; see the Galaxy documentation from IBM. The items recommended are candidates for inclusion in the pile. If the user considers that an item should belong to the pile, he or she can just drag it from the recommendations panel into the pile (view A). Of course, a recommended item can also be opened in a new pile.

Semantic auto-completion (First version implemented). When editing the metadata, the user gets some help from dropdown lists containing predicates and objects already present in the triple store. The dropdown is automatically updated when the user enters more letters in the input field. This semantic auto-completion feature needs to be further refined.

High-level metadata (Proof of concept done). We have noticed that, in order to implement scenarios such as "Claudia goes to Belfast", we would need more high-level metadata than what is usually present in NEPOMUK. (The majority of metadata is often at the operating-system level, such as filenames, timestamps, addresses, etc.) For example, we would need to know that a certain text is a hotel reservation or a flight ticket. As a proof of concept, a specialized feature for the extraction of flight-ticket metadata from e-mail messages and web pages has been developed. For the future, we would like to see more high-level metadata extractors added to the LocalDataAlignment component.

Notification management and resource sharing (On wish list). In the case studies, the need for efficient sharing and notification mechanisms has been highlighted. These needs could be addressed in the context of NepomukSimple, but have not been realised due to other priorities.

Galaxy recommendations in NepomukSimple

Within the pile-based interface of NepomukSimple, Galaxy Activation Spread (described in Sec. 4.1) helps the user manage their piles in two major ways: (1) given

a pile with its items, it recommends possible candidate items to add to the pile, and (2) given an item, it recommends possible candidate piles to add it to. We are considering a number of other ways in which Activation Spread can help pile-based user interfaces; these are detailed in Troussov et al. (2008).

While the NepomukSimple timeline view and the map view are based on time-related and location-related statements about pile items, respectively, the recommendation view shows other items in the user's NEPOMUK-based local semantic data store that are potential candidates for the present pile, with a relevance level computed by Galaxy. First, several topics are computed for the pile items based on their text content, and these topics constitute the initial foci of the Galaxy Activation Spread. The spread then takes place through the statements (RDF relations) of the local semantic data store. In effect, this is a fuzzy polycentric query, allowing us to make full use of the ASM power, beyond the egocentric queries used by other recommenders.

Figure 2: The NepomukSimple pile-based user interface.

The second major way of using Galaxy spread-of-activation in the pile context is to recommend a pile for a given item. We have implemented such recommendations as plug-ins for the Mozilla applications Firefox (Web browser) and Thunderbird (e-mail application). With the NepomukSimple plug-in, the Firefox Web browser shows on its status bar the name of the highest-relevance pile for the content viewed, if any (see Figure 2). This recommendation is based on the text content, which is matched against the profile of each pile using activation spread mechanisms. A similar interface exists for Mozilla Thunderbird (see Figure 3). Upon viewing any item (e-mail or Web page), it can be immediately sent to any of the piles by choosing from the NepomukSimple plug-in menu at the lower right. Based on the item text and the content of the existing piles, some piles may be recommended and thus show a relevance score with "+" signs. The e-mail shown is sent between fictitious personas that we used in the project. In a similar manner, the user can get a pile recommendation when reading their e-mail with the Mozilla Thunderbird client (see Figure 4).

Figure 3: Easy piling and pile recommendation in Mozilla Thunderbird.

Figure 4: Pile recommendation for an e-mail (lower right).
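As a rough illustration of the underlying technique, the following toy sketch spreads activation from several foci (a polycentric query) over an item graph and returns activation levels that could be used to rank candidate items. All identifiers are hypothetical, and the real Galaxy algorithm (its decay model, fan-out handling and ranking) is considerably more sophisticated; see the Galaxy documentation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy polycentric spreading activation: activation flows from the initial
// foci through graph edges; after a few iterations, the most activated
// non-pile items are recommendation candidates.
class SpreadingActivationSketch {

    static Map<String, Double> spread(Map<String, List<String>> graph,
                                      Map<String, Double> initialFoci,
                                      int iterations, double decay) {
        Map<String, Double> activation = new HashMap<>(initialFoci);
        for (int i = 0; i < iterations; i++) {
            Map<String, Double> next = new HashMap<>(activation);
            for (Map.Entry<String, Double> e : activation.entrySet()) {
                List<String> neighbours = graph.getOrDefault(e.getKey(), List.of());
                if (neighbours.isEmpty()) continue;
                // each node passes a decayed share of its activation to its neighbours
                double share = e.getValue() * decay / neighbours.size();
                for (String n : neighbours) next.merge(n, share, Double::sum);
            }
            activation = next;
        }
        return activation;
    }

    public static void main(String[] args) {
        Map<String, List<String>> graph = Map.of(
                "topic:Belfast", List.of("mail:flightBooking", "doc:agenda"),
                "mail:flightBooking", List.of("doc:hotelReservation"));
        // two foci extracted from the pile's text content -> a polycentric query
        Map<String, Double> foci = Map.of("topic:Belfast", 1.0, "doc:agenda", 0.5);
        spread(graph, foci, 2, 0.8).forEach((item, a) -> System.out.println(item + " " + a));
    }
}
```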

Evaluation

NepomukSimple was developed, based on field studies of different groups of knowledge workers, using an iterative process with a continuous dialogue between developers, designers and usability experts. The evaluation process can be characterised in a similar manner: throughout the development, the prototype has undergone continuous evaluative discussion among the members of the team, based on knowledge about the users. More important here, however, are the focused evaluation activities, both with users (experts on their own work) and usability experts (experts on usability). The case studies represent different groups of knowledge workers: bioscience re

searchers at Institut Pasteur, France; information project workers at Time Manager International (TMI), Greece, UK and Denmark; systems development and computer science researchers at SAP Research in Germany; and members of the Linux community Mandriva Club, with members all over the world. More in-depth descriptions of the situation in the different case studies can be found in Deliverables 8.1 (Polonsky, Polly, Lindquist & Bogdan 2006), 9.1 (Papailiou, Panagiotou, Apostolou, Mentzas, Nomikos, Dimitriadis, Gudjonsdottir & Edlund 2006), 10.1 (Grebner et al. 2006) and 11.1 (Lauriere, Solleiro, Trug, Bogdan, Groth & Lannerö 2006).

Methods

For the NepomukSimple prototype evaluation we used both expert evaluations and evaluations with target-group users. The overall user-centred design methodology and the descriptions for doing usability testing and evaluations are given in "Design methods and work process: From field studies to design guidelines", NEPOMUK Deliverable D1.4 (Lindquist et al. 2008).

Expert evaluation

For the expert evaluation we used a heuristic evaluation plan as guidance, an integrated method for evaluating developed interfaces. Basically, the method consists of five steps:

1. Gathering domain knowledge
2. Conducting the heuristic evaluation
3. Categorizing the issues
4. Prioritizing the issues
5. Writing the report, which includes recommendations for solving the problems

The results were also summarized and presented in a table where two main parameters, Importance (to change) and Difficulty (to change), show what aspects need to be considered (see Figure 5). We used the same table later on to classify the issues found during the usability evaluation.

- High-value (lower right): very important issues that require less effort to fix
- Strategic (top right): very important issues that require more effort to fix
- Targeted (lower left): less important issues that require less effort to fix
- Luxuries (top left): less important issues that require more effort to fix

Positive aspects of the prototype and valuable conceptual ideas were listed, to be recognised later in the prototype development. The expert evaluation also worked as a catalyst for understanding how the usability evaluation with users should be designed to generate valuable and usable data for future development work.

Usability evaluation with users

The usability testing was conducted over a six-day period with nine persons, all representing knowledge workers, either part of or resembling the case study groups.

Figure 5: Prioritising problems.

The NepomukSimple prototype was set up on a laptop, showing both an empty pile and a pile filled with different types of previously created persona data, i.e. e-mail, photo, folder, documents, calendar and address data, etc. (Gudjonsdottir & Lindquist 2008, Pruitt & Adlin 2006). NepomukSimple was presented as a new way of creating an overview of data and information on the desktop, and as a prototype with limitations regarding test data, slow response, etc. A user scenario helped the participants understand what the evaluation was about and the setting of NEPOMUK, and also put them into office-work mode. The think-aloud method was used to get hold of the users' thoughts and ideas about the prototype (Nielsen, Clemmensen & Yssing 2002). They would click-and-tell their way through buttons, fields and data, describing what they thought would happen, what they saw and how they understood what was happening. The pile concept was presented as an entrance for the click-and-tell session. Issues about the different view fields, the appearance of the prototype, the usefulness of the idea in general and the usefulness in the individual user's daily work were raised and discussed. There were two researchers present at each session. Notes were taken in a pre-made questionnaire that worked as a guideline for what questions and issues to bring up. The whole procedure was videotaped.

New area of use and positive aspects as presented by the users

The users were asked to describe situations when they, from their perspective, believed that NepomukSimple would help them carry out work tasks. None of them considered NepomukSimple to be a self-explanatory tool that would be obvious for them to use. The interface outline resembled ordinary software they would use at work, but the outcome of the users' own test activities, such as setting parameters or selecting a certain file, was different. After some clicking around in the prototype, some thinking and discussions with the test leader, all users could tell of situations when NepomukSimple could be useful. The person from the Mandriva Linux community believed he could handle chunks of text in books he had downloaded from the Internet. All project leaders thought it would be perfect for seeing and comparing offers and prices, for knowing when to renew contracts, and for planning projects, via the timeline.

Figure 6: Analysed and validated results from the NepomukSimple evaluation with users (issues such as "the pile must be shareable" or "move the search function above the A-, B- and C-fields" plotted by Importance and Difficulty into the Luxuries, Targeted, Strategic and High-Value quadrants).

The person working at a pharmaceutical company could see that a timeline on which data from different sources can be displayed and compared in one place would help the sales people at the company plan their sales visits in a much more effective way. They would be able to compare different kinds of marketing data, for example sales figures related to time of year, with sales visits and the specific person they were talking to, and even relate that to how they arranged for that sales meeting. A person working in a bioscience lab could see the benefit of adding patients to piles and analysing their medical records to get an overview of what treatment is given and how the treatment is working.

Lessons Learned

The timeline and Google Maps were recognisable to the users. Through the visualization of data in the C-field, the prototype became understandable, and that was the main incentive for the users to describe how they would use NepomukSimple in their work. When performing evaluations of prototypes that, firstly, require a new mindset, and, secondly, where concepts have to be familiar and still give meaning to something new, as in the NEPOMUK case, it is very important to give life to the prototype with relevant user data. The conceptual visualization and the understanding of the strength of the prototype are dependent on that. It is close to impossible to show and discuss new aspects of computerized knowledge work with users if it cannot be supported by real-life examples shown in the prototype, examples related to the test user or to project personas, or both (Gudjonsdottir & Lindquist 2009). Unfortunately, some of the data in the prototypes were not obvious to the users, due to unclear file names and strings, and to miscalculations of what could be retrieved from the Internet and presented in a common way at the time of evaluation. Had developers and evaluation designers worked more closely together when creating the details of the prototype data, some of these difficulties could have been solved.

Future Work: Recommendation Functions

(The following ideas have been elaborated in a conference paper: Troussov, Judge, Sogrin, Bogdan, Lannerö, Edlund & Sundblad (2008a).)

The IBM Galaxy system can provide relevant suggestions for related resources, and it fits very well with the NepomukSimple pile interface. However, NepomukSimple is not a very good tool for experimenting with the suggestions provided by Galaxy. Whenever a user interacts with a pile, the modified pile is saved in the system. So if a user wants to keep her existing pile, but still see what recommendations would come up if she made this or that modification, she must first make a copy of the existing pile in order not to destroy it. The experience from the web is that people like to see what happens when they pursue all kinds of associative threads. Tabbed browsing is a popular GUI feature supporting this tentative navigation: if you follow a link in a new tab, it is easy to go back to where you were before following the link. In NepomukSimple, tabs are used to separate piles, and creating a new tab means creating a new pile. Generally, such a tab is either empty or contains only the resource dropped there. To provide a playground for experimenting with possible changes to an existing pile, we would like to provide the option of creating a (temporary) copy of a pile in a new tab. Having a copy of a pile is a good starting point for playing with different pile configurations and seeing how they affect the recommendations provided by Galaxy.

However, playing with resources can be supported even further if we introduce the concept of a shelf. On a shelf, the user can put (iconic representations of) resources which at some point appear on the screen, and which the user considers potentially interesting candidates for inclusion in the pile. Resources put on the shelf can easily be re-located, included in the pile, put back on the shelf, or removed from the shelf. We believe that this GUI element would support the user in his or her mental processing of the pile topics. The usage scenario would be somewhat similar to everyday web searching, where queries are often successively refined by adding terms to and removing terms from the latest query string.

Yet another option for more fully exploiting the capabilities of the Galaxy algorithms would be to provide one positive and one negative area in the pile. Resources placed in the negative area would provide input to Galaxy to lower the ranking of resources having a strong relation with them, while resources put in the positive area would be treated just as ordinary initial activation nodes in the Galaxy spread-of-activation algorithms.

Future Work: Customization & Sustainability

We have documented a number of ways in which we would develop NepomukSimple further. The documentation is in the form of a video prototype. The development ideas are firmly grounded in today's NEPOMUK PSEW implementation infrastructure, such as RDF, PIMO and Eclipse RCP; therefore their implementation does not need components that are beyond the current state of the art. Our main concern in this development is the long-term sustainable pile. We assume that the pile of a real-life project will have several thousand elements towards the project completion time.
Some of the current interface elements already fit this vision: easy piling via drag-and-drop, easy piling from Mozilla applications, "what else to add to the pile" recommendations, "which pile to add this item to" recommendations, etc. However, other interface principles do not scale up to a high number of items in the pile. Therefore the user can have multiple views of the pile, each

showing different items, filtered according to various principles. One standard view, called "All", will always include all pile elements. Based on the All view, a new view can be created by starting a new filtering. A filter can be defined by drag-and-drop of an item property to the top of the view. Item properties are represented as icons, shown for each RDF statement about a pile item. Such icons can also be used (also via drag-and-drop) to add more columns to a certain view, therefore letting the user sort and search within a view more easily. Figure 7 shows a view where a date column has been added via drag-and-drop from an item property.

Figure 7: Item property icons (middle) used for filtering and column configuration (a date column at the left).

Figure 8 shows an e-mail view which is filtered using drag-and-drop of a mail icon. It shows the 3 mails out of the total of 29 items in the pile.

Figure 8: A Mail view created via drag-and-drop of the Mail icon into a filtering area.

All views can be configured in this way, including the NepomukSimple Search and Recommendation views. In the end, NepomukSimple becomes a collection of such customized views of the pile items, or of items based on them (such as recommendations). The filtering of the views that are shown and used most can help to improve the recommendation machinery. For example, we could assign more initial activation to the items that fit the filtering criteria.

Besides long-term pile sustainability, another issue that we were interested in is the logical combination of several pile items. For example, a project meeting abroad is usually associated with some calendar entry, some agenda (e.g., in an e-mail), some flight reservation (possibly also in e-mail) and some hotel reservation (e.g., on a Web site). When such a combination is made, the dates in the calendar item should be consistent with the dates of the reservations, so it should be possible to add logical constraints (for example, the Structured Query Builder or a similar interface can be used to define such constraints). A minimal sketch of such a date-consistency check follows.
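The following is a minimal sketch of one such logical constraint, checking that the hotel reservation covers the meeting dates; the types, names and the check itself are illustrative assumptions, not an existing NepomukSimple interface.

```java
import java.time.LocalDate;

// Sketch of a date-consistency constraint on a composite item: the hotel
// reservation must cover the meeting dates.
record DateRange(LocalDate from, LocalDate to) {
    boolean covers(DateRange other) {
        return !other.from().isBefore(from) && !other.to().isAfter(to);
    }
}

class CompositeItemCheck {
    public static void main(String[] args) {
        DateRange meeting = new DateRange(LocalDate.of(2008, 2, 27), LocalDate.of(2008, 2, 28));
        DateRange hotel = new DateRange(LocalDate.of(2008, 2, 26), LocalDate.of(2008, 2, 29));
        if (!hotel.covers(meeting)) {
            System.out.println("Warning: hotel reservation does not cover the meeting dates");
        }
    }
}
```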

Figure 9 shows such composite items, e.g. a meeting called "kick-off" which has a ticket associated with it. Like for all other NepomukSimple views suggested by this prototype, the icons of the combined items are shown near the item name, as properties. The interface for expressing item composition can be the Pile interface itself (i.e. a composite item can be edited in a pile), though care must be taken with the large number of piles that will result.

Figure 9: Composite pile items.

Future Work: Collaborative Spread of Activation

This section introduces the idea of using spread of activation for awareness in collaborative software applications. Awareness is defined as understanding the activity of others in the context of one's own activity. Traditional spread-of-activation use scenarios assume a lone user looking for interesting items based on some focus items. In collaborative settings (in which most knowledge workers find themselves today) there may be items that other users would like (to some extent) our user (or some of their co-workers) to look at. In such a case, these items should receive even more powerful activation for all searches made by the user, and maybe also when searches are not made explicitly, i.e. the user would get a notification on items that their co-workers changed, that were searched for in the past, or that are simply related to some objects they work with. The user will therefore become more aware of the relevant work of co-workers.

The theoretical framework for such spread of activation, which we could call Mutual Collaborative Spread of Activation (SoA), is described in (Sandor 1997). The set of items that are in the user's interest (and hence set the starting point for a polycentric query) are in the user's focus. The focus is thus covered by the current spread-of-activation scenarios. The set of items that others (co-workers) would like the user to look at are said to have a nimbus towards the user. The nimbus is not considered by the current SoA scenarios; however, it is easy to notice that the nimbus can spread through the network exactly as the focus does, using the same spread-of-activation methods. The addition of nimbus to this framework leads us to the "mutual collaborative" characterization of this approach.

Focus is a bit different from a simple polycentric search in that it has a persistence dimension. The focus models the interests of a user, which can be extracted from the explicit searches the user made, but also from other sources, such as the piles created, the objects looked at, etc. Nimbus has similar long-term properties. Also important is the time evolution of focus and nimbus: for example, the nimbus of a meeting calendar item will depreciate a lot after the meeting time. In spread-of-activation terms, this will lead to much lower (or zero) initial activation.
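As a loose illustration of how nimbus could enter the picture, the sketch below adds a time-depreciated nimbus term to the initial activation fed into a spreading-activation pass like the one sketched earlier. The exponential decay and the additive combination are our own assumptions, not the formulation of (Sandor 1997).

```java
import java.time.Duration;
import java.time.Instant;

// Sketch: nimbus contributes to a node's initial activation and depreciates
// over time, e.g. a meeting's nimbus after the meeting has taken place.
class FocusNimbusSketch {

    static double decayedNimbus(double nimbus, Instant eventTime, Instant now,
                                double halfLifeHours) {
        double hours = Duration.between(eventTime, now).toMinutes() / 60.0;
        if (hours <= 0) return nimbus; // the event is still ahead: full nimbus
        return nimbus * Math.pow(0.5, hours / halfLifeHours); // exponential depreciation
    }

    // initial activation = the user's own focus plus what co-workers project at them
    static double initialActivation(double focus, double decayedNimbus) {
        return focus + decayedNimbus;
    }

    public static void main(String[] args) {
        Instant meeting = Instant.parse("2009-01-14T14:00:00Z");
        Instant now = meeting.plus(Duration.ofHours(48)); // two days after the meeting
        double n = decayedNimbus(1.0, meeting, now, 24.0); // two half-lives -> 0.25
        System.out.println(initialActivation(0.3, n));     // 0.55
    }
}
```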

2.2 Semantic Email and Semanta

Semanta is the prototype implemented as a proof of concept for the notion of Semantic Email. It has been implemented as an add-in/extension to two popular Mail User Agents (MUAs): Microsoft Outlook 2003 and Mozilla Thunderbird (the prototype for the latter MUA will be developed as part of the DERI Lion II project at NUI Galway). Based on the sMail Conceptual Framework, it assists the user with handling common workflows, e.g. meeting scheduling, task delegation, event announcement, information exchange, etc., that are (co-)executing in e-mail threads. Although Semanta would have many benefits as a stand-alone application, the full power of Semantic Email is exploited via integration within the Social Semantic Desktop. By integrating Semanta as a Semantic Desktop application, semantic knowledge captured and created by Semanta can be exploited by other intelligent desktop applications. Given that Semanta already represents semantic knowledge in RDF, integrating Semanta on the SSD only required modifying the Semantic Email (sMail) Ontology to extend concepts already existing in the semantic desktop ontologies (e.g., nmo:Email), in addition to storing and accessing the generated semantic knowledge in the desktop's RDF repository.

Figure 10 illustrates the general architecture of Semanta on the SSD. The business logic is separate from the GUI and is available within the Semantic Email service.

Figure 10: Overview of the Semanta architecture.

This service provides the vast majority of the business logic for Semanta. It acts as the invisible layer beneath the GUI which performs:

- semi-automatic content annotation (via the use of the text analytics service);
- reading and writing the RDF statements that accompany e-mail messages into and from the semantic e-mail header;
- reasoning over which options a user is given when reacting to action items (given the models of the sMail Conceptual Framework);
- detecting tasks or events generated within e-mail;
- storing data in the SSD's central RDF repository;
- querying data in the repository to provide the user with information regarding action items, e-mails, tasks, events, people and their relationships.

Hence only the GUI is dependent on the targeted Mail User Agent (MUA). Since Semanta uses RDF for knowledge representation, it can support different users using different MUAs on different platforms. The Semantic Email service is just one of the services provided on the SSD. Another service is the Text Analytics service, which is used by the Semantic Email service to provide semi-automatic annotation of e-mail content. The Text Analytics service uses Ontology-Based Information Extraction (OBIE) techniques to elicit speech acts (action items) in e-mail bodies. The information extraction is based on a declarative model which classifies text into speech acts based on a number of linguistic features, like sentence form, tense, modality and the semantic roles of verbs. The service deploys a GATE (General Architecture for Text Engineering) corpus pipeline. The pipeline consists of a tokeniser, a modified sentence splitter, a POS tagger, keyphrase lookup via finite-state gazetteers, and several JAPE (Java Annotation Patterns Engine, regular expressions over annotations) grammars. Some of these grammars are run conditionally, based on the outcome of previous JAPE annotations, and are ordered in priority to consume the longest matching annotation. Earlier work implemented similar knowledge-based (KB) approaches using earlier versions of GATE. The current service is at the beta-testing stage, whereby the JAPE grammars are iteratively tuned based on the outcome of each test cycle. We are currently improving the recognition of the persons involved in the e-mail, taking advantage of existing structured sources such as address books. We have also included simple co-reference resolution.

Figure 11: Semanta Annotation Wizard.

This technology was evaluated separately by comparing the results of manually annotated versus automatically annotated e-mails; the average f-measure we achieved is acceptable considering that this is our first evaluation. The KB approach to IE is an iterative process whereby linguistic engineers must test, evaluate and tune their grammars over several cycles. In the initial evaluation, the low f-score is attributed to a deficit of dictionary entries, and we intend to improve our performance by extending our gazetteer list entries. However, in its current state, the service suffices for semi-automatic annotation. As Semanta will continue beyond the lifespan of NEPOMUK, further refinement and improvement of the Text Analytics annotation service will continue. For specific details pertaining to the text analytics applied to Semanta, we refer the reader to Section 4.4.

Both the Semantic Email and the Text Analytics services have access to the knowledge of the sMail models via the sMail Ontology and other NEPOMUK ontologies. The MUA is still responsible for creating and sending e-mail messages; Semanta is responsible for annotating and processing outgoing and incoming semantic e-mail, respectively. Since the semantic knowledge captured and processed by Semanta is stored in the RDF repository, this enables information integration between e-mail-generated data and items on the user's desktop.
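For readers unfamiliar with GATE, the following sketch shows roughly how such a corpus pipeline (tokeniser, sentence splitter, POS tagger, gazetteer, JAPE transducer) is assembled with the GATE embedded API. The grammar path is a hypothetical placeholder; Semanta's actual grammars and gazetteer lists are not reproduced here.

```java
import java.io.File;
import gate.*;
import gate.creole.SerialAnalyserController;

// Rough sketch of assembling a GATE corpus pipeline like the one described
// above and running it over one e-mail body.
public class SpeechActPipeline {
    public static void main(String[] args) throws Exception {
        Gate.init(); // requires a configured GATE installation
        // load the ANNIE plugin that provides tokeniser, splitter, tagger, etc.
        Gate.getCreoleRegister().registerDirectories(
                new File(Gate.getPluginsHome(), "ANNIE").toURI().toURL());

        SerialAnalyserController pipeline = (SerialAnalyserController)
                Factory.createResource("gate.creole.SerialAnalyserController");
        pipeline.add((ProcessingResource) Factory.createResource("gate.creole.tokeniser.DefaultTokeniser"));
        pipeline.add((ProcessingResource) Factory.createResource("gate.creole.splitter.SentenceSplitter"));
        pipeline.add((ProcessingResource) Factory.createResource("gate.creole.POSTagger"));
        pipeline.add((ProcessingResource) Factory.createResource("gate.creole.gazetteer.DefaultGazetteer"));

        // a JAPE transducer loaded with (hypothetical) speech-act grammars
        FeatureMap params = Factory.newFeatureMap();
        params.put("grammarURL", new File("grammars/speech_acts.jape").toURI().toURL());
        pipeline.add((ProcessingResource) Factory.createResource("gate.creole.Transducer", params));

        Corpus corpus = Factory.newCorpus("mail");
        corpus.add(Factory.newDocument("Can we discuss the review tomorrow afternoon?"));
        pipeline.setCorpus(corpus);
        pipeline.execute();
    }
}
```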

These items, representations of people, files, appointments, tasks, projects etc., frequently consist of artefacts of communication workflows. Thus data generated by Semanta can be used by other Semantic Desktop applications, and vice versa. Additionally, by using the SSD as a basis for Semanta, relationships between the given artefacts (e.g., a task attended to by a group of people) and other data structures (e.g., a folder) and information elements (e.g., files in the same folder, or a project related to the folder) can be deduced.

We will now have a look at the functionalities offered by Semanta and how they are abstracted by a simple user interface. The screenshots shown in this section are taken from Semanta's Outlook add-in. When writing a new e-mail, the user can annotate it. After an attempt at automatically recognising action items via the text analytics service, the user can review, change or create new annotations via the Semanta Annotation Wizard (see Figure 11). Rather than providing the user with a list of predefined speech acts for annotation (there are 21 unique instances), the user is supported in building the annotation with literally only a few clicks. This process takes between two clicks for simpler annotations, e.g. "Request Information", and four or more clicks (depending on the number of recipients) for more complex annotations, e.g. "Request a Meeting between yourself, Dirk and Claudia and direct the request to Dirk". A dynamic sentence (at the bottom of the annotation wizard) assists the user with constructing the annotation. It is worth noting that the objects populating the choices in the wizard are dynamically loaded from the sMail ontology via the Semantic Email service.

When the user sends an annotated e-mail, the annotations, together with other metadata regarding thread information, are invisibly transported alongside the content in the e-mail headers. Semanta scans incoming messages for any action items; if some are found, the messages are flagged red using the default Outlook flags. If no action items are found, the item is flagged blue. When an e-mail is viewed with Semanta, the action items are brought to the user's attention. The user can then react to each individual action item via a right mouse click. Depending on its type, the user is given a number of appropriate options. For example, in Figure 12, Dirk is still considering the second action item, which constitutes a request for a joint meeting. On right click, a number of appropriate options are given whereby Dirk can agree to the event, as well as decline or amend the proposal. Additionally, he can instruct Semanta to simply ignore the request, or do something else, like question the reason for the meeting.

Figure 12: Viewing / processing an e-mail.

In Figure 12, Dirk has already dealt with the first action item, which constituted a request for information ("How is the report going?"). After right-clicking the item, he chose the "Deliver Information" option, whereby he was shown a text box for the provision of this information ("So far, so good"). In this case, whatever the input text, it was automatically annotated as an Information Delivery. This becomes a new action item (i.e. it needs to be brought to the manager's attention) in the eventual reply from Dirk. It is also automatically linked to the action item in the original e-mail (i.e. the information request from the manager).
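The invisible transport of annotations in the e-mail headers, described above, can be illustrated with plain JavaMail: the RDF annotations travel in a custom message header, unseen by the reader. The header name, addresses and Base64 encoding are assumptions for illustration, not Semanta's actual wire format.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Properties;
import javax.mail.Session;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

// Sketch: RDF annotations piggy-back on an ordinary e-mail in a custom header.
public class SemanticHeaderSketch {
    public static void main(String[] args) throws Exception {
        Session session = Session.getDefaultInstance(new Properties());
        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("manager@example.org"));
        msg.setRecipients(MimeMessage.RecipientType.TO, "dirk@example.org");
        msg.setSubject("Review meeting");
        msg.setText("Can we discuss the review tomorrow afternoon?"); // visible content

        String rdf = "<rdf:RDF>...</rdf:RDF>"; // serialised action-item annotations
        msg.addHeader("X-Semantic-Email",
                Base64.getEncoder().encodeToString(rdf.getBytes(StandardCharsets.UTF_8)));
        msg.saveChanges(); // the annotated message can now be sent by the MUA
    }
}
```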

If Dirk accepts the meeting request, Semanta will detect that an event involving Dirk has been generated (Figure 13).

Figure 13: Semanta detects new events/tasks.

Dirk can then review the generated activities via a right mouse click, where he can either dismiss the detected events and tasks or add them to his Outlook Calendar or Outlook Tasklist, respectively. Currently, Semanta can only populate this window up to a certain degree, and Dirk is required to complete the information. However, Semanta knows who the meeting participants are, and these are appropriately listed as the attendees for this appointment. Back on the manager's side, on receiving the automatic e-mail with the event approval action item, the manager can similarly review this action item, where the options would be to acknowledge the meeting, ignore it, or do something else. If the manager acknowledges Dirk's reply, they are also prompted with suggestions to add the event to their calendar.

The links between appointments and tasks created via Semanta and the e-mail messages from which they were generated are not lost. On the contrary, Semanta shows these links to the user in the Semanta toolbar. Figure 14 shows the two buttons provided in the Semanta command bar for this purpose: the Related E-mail button loads the source e-mail for stored tasks and events, whereas the Related Activity button points back to any tasks/events generated from an e-mail message. The same linking functionality is also provided for messages within one e-mail thread. In Figure 14 the reader can also note that, alongside the Related Activity button, the user is given the possibility to traverse the thread via the Previous button.

Figure 14: Linking related E-mail, Task and Event items.

So far we have shown how Semanta can support the user in finding and exchanging action items within e-mail. We have also shown how, by reacting to individual action items within e-mail messages, the user is effectively executing ad-hoc workflows with the support of Semanta. We also discussed how these workflows are effectively a string of exchanged action items, each of which is either directly or indirectly related to the previous action item. Therefore all executing workflows in our system can be viewed as action-item threads. We will now introduce this function as it is provided by Semanta's Action Item Tracker. Three views are available:

1. Pending Incoming: this view shows all incoming action items (e.g., requests, assignments, suggestions) which the user has received but not yet processed.
2. Pending Outgoing: this view shows all outgoing action items (e.g., requests) for which the user is still awaiting a reply.
3. All Items: this view shows all incoming and outgoing items, regardless of whether they have been tackled by the user or the user's contacts.

By viewing the pending incoming action items, users can keep track of items that they have received and still need to process. By double-clicking on such an item, the user is taken directly to the e-mail containing the item, where they can review it. For items in all three views, the user can expand the action item window to view more information about that particular item (Figure 15).

Figure 15: Action Items.

In particular, the user can view the context of that item within its thread. In Figure 15, Dirk chooses to view more information about one of his four pending incoming action items. He can see that this action item represents an event requested by his manager ("Can we discuss the review tomorrow afternoon?"). In the context panel, Dirk can see that he reacted to this request by asking for more information regarding the event ("What is there to be discussed?"). Although he did react to the event request, the request itself has not been answered, and that is why it is shown in the pending incoming view. He can also see that the manager already replied to this request ("The way forward!"). After viewing the context, i.e. the whole action-item thread, independently of the e-mails which were exchanged during its progress, Dirk can now decide how best to proceed.

Figure 16: Viewing an appointment generated by Semanta from PSEW.

By integrating Semanta as a Semantic Desktop application, the semantic knowledge captured and created by Semanta can be exploited by other intelligent desktop applications. Figure 16 shows the appointment generated in our example, consisting of a meeting between the manager (Martin Williams), Claudia and Dirk. The results of a pre-evaluation of the implemented system have shown that if users sacrifice some extra time for the e-mail writing process and review the automatic annotation of action items, they are in return supported by a system which:

- is aware of the existence and the status of action items within otherwise invisible workflows;
- can support the user with reviewing incoming action items and the semi-automatic provision of replies;
- can detect tasks and events generated within e-mail communication;
- links e-mails within threads, and links tasks and events with the e-mails that generated them;
- exposes all e-mail-generated knowledge to other Social Semantic Desktop applications via the RDF repository.

3 Stand-Alone Applications

3.1 imapping Client

The imapping client is now a stand-alone application for freely structuring ideas and concepts. imapping is a visual knowledge modelling approach that combines several established paradigms and bridges the gap between informal brainstorming and note-taking applications and more formal ontology-based applications like PSEW. imapping uses a zooming user interface to facilitate navigation and to help users maintain an overview, especially in large, user-organised knowledge spaces. An imap is comparable to a large whiteboard where information items can be positioned like sticky notes, but also nested into each other. Spatial browsing and zooming facilities are provided to ease structuring content in an intuitive way. The imapping approach is described in more detail in Haller (2006), and general information is available online.

Figure 17: Installing the imapping application with one single drag-and-drop.

The imapping application can easily be downloaded and installed for all platforms supporting Java; a dedicated Mac application is also available. While it can be easily installed (see Fig. 17) and used completely stand-alone, the imapping application will use a NEPOMUK server, like the one that comes with PSEW (see Sec. 2), if present, to align with other NEPOMUK applications by loading and saving all uniquely named entities (i.e. CDS NameItems). Documentation on how to use the application is included in the application itself: when started, an interactive imap is pre-loaded explaining all interactions (see Fig. 18).

The NEPOMUK imapping client uses Conceptual Data Structures (CDS) as a flexible data model that is easily extensible and adaptable by the user. CDS is described in Völkel & Haller (2006) and in the preceding deliverable D1.2 (Völkel et al. 2008). The CDS back-end is described in technical detail later in this deliverable. The NEPOMUK imapping application has two other CDS tools integrated that give direct access to the CDS model underlying an imap and that can be called with hotkeys: the web-browser-based CDS editor Hypertext-based Knowledge Workbench (HKW), described in Völkel et al. (2008), and QuiKey, a kind of CDS command line described below. These two additional tools follow different interaction approaches: HKW is a full-featured CDS editor and browser showing each item in its structural context, with all content-related items displayed and accessible with one click. It uses the whole browser window and shows all related items in positions according to their relation to the item in focus: all CDS details of the item are listed directly below the item, all contexts above it, all annotations in the upper right corner, etc. (see Fig. 19).
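To give a flavour of the CDS model mentioned above, here is a deliberately minimal sketch of its core idea: everything is an item, and statements, which are themselves items, connect items via relation items. The real CDS API (described in technical detail later in this deliverable) is considerably richer, e.g. with built-in relation types and content items; all names below are our own simplifications.

```java
// Deliberately minimal sketch of the CDS idea: everything is an Item, and
// Statements (themselves Items) connect Items via Relation Items.
interface Item { String label(); }

record NameItem(String label) implements Item {}   // a uniquely named entity
record Relation(String label) implements Item {}   // e.g. "works in"

record CdsStatement(Item subject, Relation relation, Item object) implements Item {
    public String label() {
        return subject.label() + " " + relation.label() + " " + object.label();
    }
}

class CdsSketch {
    public static void main(String[] args) {
        Item dirk = new NameItem("Dirk");
        Item cid = new NameItem("CID project");
        Item s = new CdsStatement(dirk, new Relation("works in"), cid);
        System.out.println(s.label()); // "Dirk works in CID project"
    }
}
```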

Figure 18: Tutorial map in the imapping application explaining its usage.

QuiKey

QuiKey is a light-weight tool that can act as an interactive command-line for a semantic knowledge base. It focuses on highest interaction efficiency to browse, query and author graph-based knowledge bases in a step-by-step manner. QuiKey is inspired by the Mac tool Quicksilver. It combines ideas of simple interaction techniques like auto-completion, command interpreters and faceted browsing, and integrates them into a new interaction concept.

Figure 19: HKW, the Hypertext-based Knowledge Workbench, showing the item Dirk Hageman and all related items around it.

Figure 20: Using QuiKey to navigate to items in an imap.

QuiKey is described in more detail in Haller (2008a), and general information is available online. QuiKey is now integrated into the imapping application (see Fig. 20). It can be used for the following actions:

- to quickly enter new items without having to deal with finding an appropriate position and other visual properties
- to enter CDS statements (i.e. triples/links) between existing items without having to draw an arrow between them, which otherwise can be quite inconvenient for items that lie far apart (see Fig. 21)
- to quickly find existing items in the current imap via an auto-completing search and open them in the imap
- to browse the graph structure of the knowledge base in a minimalistic interface; this even works for items that are not yet positioned in the imap
- to interactively construct simple semantic queries
- to construct more complex queries out of stored simple queries

Lessons Learnt and Future Work

Most users that were confronted with the imapping concept and prototypes generally liked the idea and often stated that they would really need a tool like this. After several iterations of user testing we learnt that, even with such a promising concept, it is quite hard to design interactions in a way that is really intuitive and efficient for novice users, especially since user feedback is often inconsistent or even contradictory. It was a challenge to combine such conflicting wishes into a smooth interaction concept.

Figure 21: Using QuiKey to make the statement (triple) that Dirk works in the CID project.

Similarly, the QuiKey concept has been well received in the research community and has won a best poster award (Haller 2008b). However, since QuiKey is quite versatile and can achieve certain actions with very little interaction, it becomes immediately important to make users aware of what exactly they are doing, especially when they are about to make changes to the knowledge model. For that, an explanation feature is currently being developed that, in every state, explains in a short sentence what the current output means or what is about to happen when a certain action is carried out. The current implementation of both might actually become the basis for more advanced versions with the level of maturity required for production use. Since both imapping and QuiKey are also the subject of a currently ongoing dissertation project and may even become the subject of a commercial spin-off, there is some future work to be expected. This may include advanced features like actual infinite zooming; item types like pictures, external documents, web pages or queries; tighter integration of QuiKey into the imapping interaction concept; and an evolution of the CDS back-end to connect to existing popular semantic systems like Semantic MediaWiki (Krötzsch, Vrandecic, Völkel, Haller & Studer 2007). Also, since QuiKey needs very little screen space, and zooming is a core concept of many state-of-the-art mobile user interfaces, both interaction concepts are good candidates for future mobile use.

3.2 Nepomuk KDE

Nepomuk-KDE is a sub-project of Nepomuk which aims to provide a full implementation of the standards and APIs defined in Nepomuk on the KDE Desktop, by introducing the technologies to the KDE community and helping with their integration as a central KDE technology. KDE is a powerful Free Software graphical desktop environment for Linux and Unix workstations. It consists of an elaborate development framework and a large collection of desktop applications, including a complete office suite and everyday tools like an e-mail client and a powerful internet browser. KDE is the leading desktop environment on Unix derivatives. As a result of the Nepomuk-KDE project, the Nepomuk-KDE middleware and the core services providing important features like RDF storage have been implemented and integrated within the (current) 4.x release of the KDE framework, and Nepomuk-KDE components have been included in kdelibs, the core of KDE on which each KDE component is based. A more elaborate overview of the KDE community involvement is given in NEPOMUK deliverable D7.2 (Trüg, Lauriére & Barth 2007). A small but growing number of native applications support the infrastructure and API offered by the KDE-Nepomuk libraries, such as the file manager Dolphin for tagging, or Strigi for crawling and extraction of meta data. Within this workpackage, SemNotes, a tool for note taking, has been developed which makes heavy use of the features offered by libnepomuk, thereby emphasizing the additional benefit gained by using semantic technology on the KDE desktop. It is described subsequently.

3.2.1 SemNotes - The Semantic Note-taking Tool

SemNotes is a note-taking application developed for KDE4, using Nepomuk-KDE libraries. It uses the PIMO ontology to store the notes in the Nepomuk RDF store as instances of pimo:note. The data stored about a note consists of: title, content, tags, and creation and last modification date/time.

Plugins

The architecture of SemNotes is plugin-based. There are three possible types of plugins: editor, analyzer and visualizer plugins. When SemNotes starts, an icon appears in the system tray. The main window is shown/hidden when clicking the system tray icon. The main window displays the existing notes and provides a title filter and access to the plugins.

Editor plugins are, as the name suggests, note editors. An editor window has a title line, a body text editor and a tag input field. Tags are typed in the input field as comma-separated words. When writing, the auto-completion feature offers existing tags from NEPOMUK as options, while new words become new NEPOMUK tags. There is a simple editor for editing plain text notes. The editor that makes most use of the semantic technologies provided by Nepomuk is the linked editor, which automatically links notes to the resources they reference by creating relations of type pimo:isRelated. The referenced resources can be anything that makes sense, like people (contacts from the address book); artists (taken from the music the user has on her computer); places, cities, countries.

Figure 22: SemNotes user interface.

The user can choose in the settings dialog which types of resources she wants linked to the notes.

Analyzer plugins work on one note at a time. They analyze the note in various ways (the content, tags, references, etc.). Two of the analyzer plugins provide basic export functionality, to text and to HTML files, and another one allows import of notes from files. When a note is exported to or imported from a file, the tags that are set on the note are also assigned to the resulting file, and vice versa. More complex analyzer plugins are the keyword extraction and the controlled language plugins (ontology generation, meeting minutes and status reports). The keyword extraction plugin, described in section 4.2, generates a list of keywords based on the content of the note and displays them in a dialog for the user to choose some, all or none of them and set them as tags on the note. The controlled language plugins described in section 4.3 parse notes written in restricted vocabularies specific to their dedicated task: ontology authoring or ontology population.

Visualizer plugins use data from all the notes that are shown in the main window. This type of plugin can set filters on the list of notes that is displayed in the main window. The timeline and the tagcloud are visualizer plugins. The timeline adapts to the time interval to be shown. The bar height for each interval is the normalized value of the number of notes created in that interval. When a bar is clicked, a time filter is set, allowing only the notes created in that interval to be displayed. The tagcloud is built based on the tags assigned to all notes. The tags are clickable. When a tag is clicked, a corresponding tag filter is set. When a filter is set from a plugin or is removed from the main window, all the visualizer plugins are notified and refreshed, using only the filtered notes as a basis. Another visualizer plugin is the linked resources plugin. It displays a list of resources referenced by the notes, grouped by type. Clicking on a resource in the list will set a resource filter; only the notes referencing that resource will be displayed.
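SemNotes itself is built on the Nepomuk-KDE (C++) libraries; purely as an illustration of the data stored about a note and of the filter behaviour just described, the following Java sketch uses hypothetical types and is not SemNotes code.

    import java.time.LocalDateTime;
    import java.util.*;
    import java.util.stream.Collectors;

    // Hypothetical in-memory stand-in for a pimo:note instance.
    record Note(String title, String content, Set<String> tags,
                LocalDateTime created, LocalDateTime modified) {}

    class NoteFilters {
        // Tag filter, as set when a tag in the tagcloud is clicked.
        static List<Note> byTag(List<Note> notes, String tag) {
            return notes.stream()
                    .filter(n -> n.tags().contains(tag))
                    .collect(Collectors.toList());
        }

        // Time filter, as set when a timeline bar for [from, to) is clicked.
        static List<Note> byInterval(List<Note> notes, LocalDateTime from, LocalDateTime to) {
            return notes.stream()
                    .filter(n -> !n.created().isBefore(from) && n.created().isBefore(to))
                    .collect(Collectors.toList());
        }
    }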

3.3 Web-Application BounceIt

Given that information on our desktops is rarely purely personal, BounceIt provides a much-needed smooth integration between personal and shared information. BounceIt has a browser-based interface and focuses mainly on data and meta-data management (sharing and search). NepomukSimple and the imapping application, on the other hand, are stand-alone systems that focus mostly on sophisticated meta-data creation and its management on the local desktop. BounceIt makes it possible to share, via the Nepomuk P2P platform, the information created within other Nepomuk components like NepomukSimple or the imapping application, as well as to carry out a semantic search across the shared information spaces. Moreover, BounceIt provides an alternative browser-based interface for viewing local information and editing text content entered in NepomukSimple or imapping, using the Wiki model that is compatible with all these systems. This information can then be accessed from the Web using the BounceIt central server, which is a node within Nepomuk's P2P system.

Functionalities

The main BounceIt functionalities are:

- A semantic wiki where users can specify semantic properties of a wiki page (based on the Semantic Pad prototype; see D1.1 for details).
- Folder sharing via P2P: A local folder can be shared with other P2P network users in a secure way. The available files can also be securely accessed from any Web browser. Once shared, the users get both read and write permission, in the same spirit as a regular wiki. However, just as in a wiki, various folder versions can be saved and made available within the P2P network, to protect from potential errors of collaborative editing. Users are also informed about the differences between the various versions and can synchronize their local information with the version of interest from the P2P network. All this amounts to a kind of P2P-based SVN.
- Semantic search and navigation in shared folders: users can carry out simple queries in the format property:value (e.g., type:project) that take advantage of the automatically extracted metadata and the user annotations in the semantic wiki (see the sketch after the architecture overview below). The semantic annotation also helps to browse and view the files.
- Semantics-based communication control: Users can select new content based on a topic of interest as indicated by the content's semantic properties. The idea is based on the notion of focus-nimbus as described, for example, in the case-study deliverable D8.1.
- Simple ad-hoc collaboration: Easily create a collection of items of different types and share it with others.
- Automatic construction of annotated social networks: automatically extract social connections from sharing activity and annotate these connections and nodes based on the semantics indicated within the shared information. The result is a network of people automatically annotated with the known individual and common interests.

Architecture

BounceIt integrates with technology from most core Nepomuk WPs:

- WP1: Semantic Pad, Wiki model

Figure 23: Overview of the BounceIt architecture.

  - Creation and editing of wiki pages with semantic properties (see D1.1 and D1.2 for more details)
- WP2: Data Wrapper, Local Search
  - Automatic extraction of data from local files (PDF, office documents, mails, ...)
  - Indexing and search of local files (based on both full-text and meta-data indexes)
- WP4: Distributed Search & Storage
  - Share folders with other users
  - Find all folders available for a user
  - Versioning
  - Synchronization
- WP6: RDF storage, Security component
  - Storage and indexing of the metadata extracted by the Data Wrapper as well as user annotations in the semantic wiki
  - An open user access management system based on public user keys
  - Secure distribution of shared content between users via open communication channels
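As an illustration of the property:value query format mentioned under Functionalities, such a query can be parsed and matched against extracted metadata as in the following minimal sketch. The class and method names are hypothetical, not BounceIt's actual API.

    import java.util.*;

    // Minimal sketch of a property:value query, e.g. "type:project" (hypothetical API).
    class SimpleSemanticQuery {
        final String property, value;

        SimpleSemanticQuery(String query) {
            String[] parts = query.split(":", 2); // split on the first colon only
            if (parts.length != 2)
                throw new IllegalArgumentException("expected property:value");
            property = parts[0].trim();
            value = parts[1].trim();
        }

        // True if the resource's metadata (property -> values) satisfies the query.
        boolean matches(Map<String, Set<String>> metadata) {
            return metadata.getOrDefault(property, Set.of()).contains(value);
        }
    }

A query such as new SimpleSemanticQuery("type:project") would then be evaluated against the metadata extracted by the Data Wrapper and the user annotations from the semantic wiki.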

Figure 24: Screenshot of the desktop browser-based interface of BounceIt, which allows the user to view and manage files and folders on the desktop. Shown are the contents of a folder Projects/fMRI with the difference between the current version (called the current snapshot) and a previous version. The "get this version" command appearing next to the changed files allows synchronizing between the different versions.

The prototype has two primary interfaces: a local browser-based application and a remote Web-based application (connected to the central server that is a part of the P2P network). These interfaces have been in parallel development and have not yet been integrated with each other. The integration is planned for after the end of the project as a part of exploitation activities.

Evaluation

BounceIt was evaluated within WP8000 as a major part of the Bionote prototype.

Actions:
- 3 expert evaluations (KTH: 8 people)
- 2 user evaluations (4 users)
- Internal use of functional components

Methods (see deliverable D1.4 for details):
- Think-aloud protocol
- Heuristic evaluations

Results summary:
- Positive overall impression
- Ability to implement a test scenario
- New ideas, changes of priorities

Specific feedback:
- Clarify access rights

Figure 25: Screenshot: The files can be viewed directly from this browser-based interface, whereby the content and the metadata of proprietary formats (e.g., images, PDF, doc files) are extracted using Nepomuk's Data Wrapper component. The Project Users panel on the right allows viewing and managing who has access to the folder.

- Clarify distinction between various profile types and contexts (e.g., professional vs. social)
- Friend-of-friend restriction on a community
- Icons, terminology issues

Exploitation and dissemination plans

A running demo of the prototype together with a couple of demo videos is available for download from the Nepomuk website. Some prototype components have been included in a commercial product, a free web service oriented towards the general public. This is the core product of Cognium Systems; it will include other Nepomuk components once they mature. Some components have been disseminated as independent open-source projects:

- Wiki model: a set of wiki-related libraries, such as parsers for various wiki syntaxes, and a common wiki model (event- and object-based).
- GWT templates: a template system for the Google Web Toolkit that allows defining the HTML layout for GWT widgets in an XML document.

Conclusion

A social semantic wiki that integrates desktop and Web environments seems to be a novel and promising approach for personal and shared information management. There are clear market opportunities for commercial adoption of this approach, and in fact, this adoption is already well on its way.

3.4 Shared Information Space SIS (Prototype)

Background

This prototype development is based on two already existing prototype versions that were built and tested earlier in the project. This approach comes from the iterative methodology, where one of the foundations is to get inspired by and learn from existing suggestions.

First lesson: It has to be something new. Our initial prototypes were really similar to normal folder structures, just a bit enhanced. One could see them as virtual folders. The lesson learnt from these evaluations was that people are really good at understanding how their current folders work and cannot really see how these new virtual folders (we called them piles) are any different. Therefore there is a need for something new, something that cannot be misunderstood or misinterpreted. The trade-off, of course, is that the user has to learn something new and does not recognise what is presented. Figure 26 presents a screenshot of one of these virtual folders created from an e-mail.

Figure 26: Prototype one.

Second lesson: A little bit of structure is good. With the lessons from our first prototype in mind, we came up with the idea of a stretchable rectangle that floats in front of the real desktop. This can be seen as a window or lens into the semantic data. Everything covered by this lens is used as a source for semantic data extraction, and the resources can then be distributed along two axes (horizontal and vertical) according to the parameters that are set at the end of each axis. This way the user can distribute resources according to their annotations. See Figure 27. This, however, gave quite an inexact and arbitrary distribution of the resources, and the user tests showed us that this distracted the user. Still, the stretchable rectangle was appreciated. It was recognised as an inspiring way to deal with breaking the silo structure of the traditional desktop, i.e. the limitation of resources by folder, type and application.

Figure 27: The semantic lens.

The Final Version

What we had was a validated understanding of a prototype that needed to be straightened up: the stretchable lens was a good solution, but with somewhat too loose a structure. Inspired by overlapping transparencies (also called overheads), see Figure 28, each containing an annotation, we came up with the idea of creating fields with resources annotated with that annotation.

Figure 28: The overlapping principle inspired by transparencies.

When these transparencies overlap and create intersections, the intersection shows resources that match the annotations of two or more transparencies. This way the user can find resources with more than one annotation.

Evaluations

Shared Information Space, SIS, was developed in an iterative process with a continuous dialogue between developers, designers and usability experts. For evaluating the prototype, we conducted both expert evaluations and evaluations with users representing the users in case studies WP8, WP9, WP10 and WP11. The overall user-centred design methodology and the descriptions of how to do usability testing and evaluations are described in Design methods and work process: From field studies to design guidelines, Nepomuk Deliverable D1.4 (2008).

Expert evaluation

For the expert evaluation we used a heuristic evaluation plan as guidance. The results were summarized and presented in a spreadsheet where two main parameters, Importance (to change) and Difficulty (to change), show what aspects need to be considered (Lindquist et al. 2008). The expert evaluation also worked as a catalyst for understanding how a usability evaluation with users should be designed to generate valuable and usable data for future development work.

Usability testing with users

The usability testing was conducted over a 6-day period with 9 persons, all representing knowledge workers, either part of or resembling the WP8-11 case study groups. Procedure: The SIS prototype was set up on a laptop. The prototype was filled with different types of data, i.e. e-mail, photo, folder, documents, calendar and address data, etc. The users were presented with a user scenario that would help them understand what the evaluation was about, help them understand the setting of Nepomuk, and also put them into office work mode. There were two researchers present at each session. Notes were taken in a pre-made questionnaire that worked as a guideline for what questions and issues to bring up. The procedure was videotaped.

Figure 29: Results from the Shared Information Space (SIS) evaluation with users: identified issues plotted along the axes Importance and Difficulty, grouped into the quadrants Luxuries, Targeted, Strategic and High-Value.

New area of use and positive aspects as presented by the users

The person working at the pharmaceutical company could see SIS as a new way for sales persons to get an overview of what is over budget.

All test persons thought it was a playful and different interface that could work as a complement in the work situation to get an overview of specific data. It became apparent to some of them why tagging would be a good thing to do. They understood that the tagging would help them set parameters on the data themselves.

Lessons learned

Just as in NepomukSimple, it was the visually recognisable view that made the users aware of what tagging is good for, the impact Nepomuk could have on their work, and how they might come to use it in the future. When performing evaluations of prototypes that, firstly, require a new mindset, and secondly, where concepts have to be familiar and still give meaning to something new, as in the Nepomuk case, it is very important to give life to the prototype with relevant user data. The conceptual visualization and understanding of the strength of the prototype is dependent upon that. It is close to impossible to show and discuss new aspects of computerized knowledge work with users if it cannot be supported by real-life examples shown in the prototype, examples related to the test user or to project personas, or both (Gudjonsdottir & Lindquist 2008).

Test for final improvements

The prototype, see Figure 30, was tested and evaluated. The users had a hard time figuring out what the annotations were, and due to the fact that the information space was empty until a field was made, they did not see that it contained any resources. Besides the threshold of getting started, it was a concept hard to grasp in the first seconds. However, when getting hold of it, the users understood the principle and started to play with the prototype. The prototype was especially appreciated among users working with large sets of data.

Figure 30: Results from the Shared Information Space (SIS) evaluation with users.

The user tests resulted in three changes before finalising. The first is to fill the prototype with every resource available. The main reason for this is not to help the user find the relevant resource but to tell him/her that something is actually happening.

The overwhelming feeling will probably also give an incentive for starting to structure the different fields. The second design change was the position of the annotation: from the border of the field to the middle of it, since the users had problems deciding which annotation belonged to what field. The third and last was to enhance the annotations themselves by showing the available annotations, letting the user know that there are more choices. The current result can be seen in Figure 31.

Figure 31: Current version.

4 Natural Language Tools and Other Enabling Technologies

As a number of tools and technologies are of a generic nature, they are not tied to a particular workbench and are deployed in a number of different scenarios. This section summarizes the efforts undertaken by various project partners to deliver a number of mature utilities and components offering different functionalities, services and APIs on the NEPOMUK desktop which can be consumed by other applications. The StrucRec component provided by IBM, which offers a graph-mining approach for identifying and disambiguating mentions of already known entities, is described first, in Section 4.1. Next, complementary to StrucRec's knowledge-based approach, the keyphrase extraction utility by NUIG is knowledge-poor and uses statistical methods over data yielded from linguistic preprocessing steps to propose a number of topical candidate terms for a given textual document. Section 4.2 reports on implementation considerations, deployment use cases and evaluation details of this component. Subsequently, Section 4.3 introduces the Roundtrip Ontology Authoring (ROA) component implemented by NUIG and the University of Sheffield, where the main focus, besides design, is on the evaluation of the approach, carried out using the System Usability Scale (SUS). The Speech-Act-Detection component outlined in Section 4.4 is used primarily by the semantic application Semanta (cf. Section 2.2); however, it has been deployed both in NepomukSimple (NEPOMUK-PSEW) and in NEPOMUK-KDE. Finally, this section concludes with an overview of the design decisions, implementation and integration choices that were made in order to provide an API for Conceptual Data Structure tools, as described in NEPOMUK deliverable D1.2 (Völkel et al. 2008) and briefly revisited in Section 4.5.

4.1 StrucRec Text Analysis and Graph Mining

IBM components are all based on the Java library Galaxy (also known as IBM LanguageWare Miner for Multidimensional Socio-Semantic Networks). Galaxy uses knowledge-based methods to analyse written texts. Galaxy allows for the extraction of concepts from texts, even if mentions of these concepts are ambiguous or if a concept is not mentioned at all. IBM components are used in Nepomuk for search, retrieval, recommendation, rating and navigation. The Galaxy library exploits a generic spreading-activation framework which allows efficient graph mining for multidimensional networks constructed from large data sets (Troussov, Sogrin, Judge & Botvich 2008; Kinsella, Harth, Troussov, Sogrin, Judge, Hayes & Breslin 2007; Troussov, Judge, Sogrin, Bogdan, Lannero, Edlund & Sundblad 2008b). One can say that Galaxy adds the dimension of soft computing methods to the methods traditionally used in ontology-based text processing, and this allows Galaxy to tolerate incompleteness and inconsistencies in data.

4.1.1 StrucRec Functionality

The Structure Recommender component uses graph mining techniques and data in the RDF repository to perform two operations:

Figure 32: The IBM Galaxy library in the context of ontologies and methods (figure based on a figure presented, for example, by Buitelaar (2006)).

1. Process a text document, disambiguate lexical expressions used in it and determine its focus or topic. Disambiguated lexical expressions or the document's focus, as PIMO concepts, can be used to enhance the document's meta data or to provide additional hyperlinks when presenting a document to users.
2. Generate recommendations of items related to one or more starting nodes.

The component documentation and code examples can be found on the wiki.

4.1.2 Semantic Text Analysis

The Structure Recommender component can perform a semantic analysis of text documents. The analysis is performed in two steps:

1. Lexical analysis, including tokenization, sentence splitting and the detection of multi-word lexical expressions. The multi-word detector is very flexible and can find expressions with changing order of constituents, optional constituents and intermediate words (Davis, Handschuh, Troussov, Judge & Sogrin 2008; Troussov, Judge, Sogrin, Akrout, Davis & Handschuh 2008). IBM LanguageWare jfrost is used as the lexical analysis engine. The lexical dictionary is constructed automatically at the start of the StrucRec component, using the data from PIMO and CDS.
2. Semantic graph analysis begins by mapping lexical expressions detected in the text to possible semantic concepts (here, PIMO or CDS items). This mapping may be ambiguous when multiple semantic concepts have identical associated lexical expressions, and an attempt to resolve this ambiguity is made in the following steps: potential semantic concepts become the nodes for initial activation on the network. The analysis then proceeds by performing spreading activation, selecting the focus of the document, and trying to disambiguate the remaining ambiguous expressions if one possible referent is located close to the determined focus.

The spreading activation, focus determination and disambiguation steps are then repeated until no ambiguity is left or no more foci can be found. IBM Galaxy is used for performing graph analysis, spreading activation, and semantic text analysis. The results of semantic text analysis include:

- Lexical expression disambiguation, which is achieved by linking words and phrases to the most likely item from PIMO or CDS with the equivalent name.
- Foci of the document, which may be treated as topics or keywords. One notable feature of this algorithm is that the determined focus may not have been mentioned in the text at all, but only implied by its context.

The quality of semantic text analysis achieved by the StrucRec component largely depends on, and is limited by, the quality of the semantic graph used in the process, i.e. the data stored in the Nepomuk RDF repository. In particular, it will never be able to disambiguate words which are not present in PIMO/CDS, and it may return wrong results when PIMO/CDS describe a thing with a name identical to the topic of a document even though they refer to completely different semantic concepts. Imagine that the RDF repository contains a list of all cities and towns in the world but no educational institutions; then a mention of Geneva University may be disambiguated as two towns in Florida, USA, simply because the system does not have any knowledge about the university in the city of Geneva, Switzerland. On the other hand, if the data in the semantic graph matches the topic of an analyzed document, then the analysis will produce reasonable results. The best quality can be achieved by using large domain-specific databases like MeSH (Medical Subject Headings) and analyzing documents in the same domain.

4.1.3 Related Item Recommendation

The StrucRec component provides functionality for related item recommendation. Similar to semantic text analysis, this recommender is based on the spreading activation functions of the IBM Galaxy library. By related items we mean PIMO/CDS items which are most strongly linked to the starting items; items which are already directly linked to the starting nodes are not returned in the list of related items. Several related item recommender functions are implemented:

- Given an item, return a list of related items. This function is primarily used in the PSEW PIMO browser as a part of the Unified Recommender, where a user can select a PIMO Thing and choose to add new relations to some other items.
- Given a text document, return a list of items related to it. This function uses and extends semantic text analysis to provide more recommendations to the user.
- Given a collection of items (a pile in the Nepomuk Simple application), return a list of related items recommended for addition to the specified collection.
- Given a set of items or a text document, recommend a collection (pile) to which these items can be added. This functionality is available as a Firefox plugin for Nepomuk Simple, for easy addition of Web pages to Nepomuk Simple.
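To make the spreading-activation idea concrete, the following minimal Java sketch (not Galaxy's actual API; all names are hypothetical) propagates activation from a set of seed nodes over a weighted graph, which is roughly the mechanism the related-item recommender relies on.

    import java.util.*;

    // Minimal spreading-activation sketch over a weighted graph (hypothetical API).
    class SpreadingActivation {
        // adjacency: node -> (neighbour -> edge weight)
        final Map<String, Map<String, Double>> graph;

        SpreadingActivation(Map<String, Map<String, Double>> graph) { this.graph = graph; }

        Map<String, Double> activate(Set<String> seeds, int iterations, double decay) {
            Map<String, Double> activation = new HashMap<>();
            seeds.forEach(s -> activation.put(s, 1.0));
            for (int i = 0; i < iterations; i++) {
                Map<String, Double> next = new HashMap<>(activation);
                for (var entry : activation.entrySet()) {
                    Map<String, Double> nbrs = graph.getOrDefault(entry.getKey(), Map.of());
                    double total = nbrs.values().stream().mapToDouble(Double::doubleValue).sum();
                    for (var n : nbrs.entrySet()) {
                        // each node passes a decayed share of its activation along its edges
                        double spread = entry.getValue() * decay * n.getValue() / total;
                        next.merge(n.getKey(), spread, Double::sum);
                    }
                }
                activation = next;
            }
            return activation;
        }
    }

Ranking the highest-activated nodes, while excluding the seeds and their direct neighbours, then yields a related-item list in the sense described above.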

4.2 Keyphrase Extraction

The goal was to deliver a component that achieves automatic keyphrase extraction from textual data, putting the emphasis on single documents. The proposed software component does not need a training step, can be utilised off the shelf, and is applicable to English, German and French. Due to licensing restrictions, the integrated distributions only cover functionality for English texts, such that German and French functionality is restricted to the web interface and the web service. The component has been evaluated quantitatively on a medium-sized corpus with a priori assigned keyphrases, whereas a user study gave insight into the acceptance of the algorithm's results in a practical setting. Evaluation results showed that the approach is comparable with the current state of the art, while potential for performance improvement still exists. For a more elaborate description of the approach, including a detailed discussion of the evaluation procedure and results, please refer to Schutz (2008).

4.2.1 Implementation

The resulting software artifact has been implemented in Java as a number of plugins for GATE, the General Architecture for Text Engineering (Cunningham, Maynard, Bontcheva & Tablan 2002a), thereby making extensive use of resources the framework provides off the shelf. Where necessary, the framework has been extended appropriately by additional plugins (language identification, stopword analyser, frequency analyser, etc.), which can also be used independently.

Figure 33: Keyphrase Extraction Architecture.

It was the aim to produce a knowledge-poor solution capable of multilingual text processing; so far, the keyphrase extraction can deal with English, German and French documents. Many of the linguistic resources (stopword and frequency lists) necessary to support additional languages have been put in place, such that the burden of integrating extra languages is lowered to some degree.

The keyphrase candidate selection procedure works as follows, and is portrayed in Figure 34: a statistical χ² measure is computed over lemmas of lexical items found in the document in relation to a large, general reference corpus, to assess their significance in the context of the given document.
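The exact variant used is detailed in Schutz (2008); one common formulation of such a term-significance test is the 2x2 contingency form of χ², comparing a lemma's frequency in the document against the reference corpus:

    \chi^2 = \frac{N\,(ad - bc)^2}{(a+b)(c+d)(a+c)(b+d)}, \qquad N = a + b + c + d,

where a is the lemma's frequency in the document, b its frequency in the reference corpus, and c and d are the counts of all other tokens in the document and the reference corpus, respectively. Lemmas whose document frequency deviates most strongly from the reference corpus receive the highest scores.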

Figure 34: Candidate Extraction Strategy.

The top 25% (or top 25, whichever comes first) significant lemmas of the document are considered cue tokens, and noun chunks (or n-grams, in the absence of a noun chunker) containing them are extracted and used to construct complex terms. Subsequently, the list of complex terms is partitioned into clusters by the Monge-Elkan string-similarity metric, in order to form clusters containing similar complex terms, as shown in Figure 35. For each cluster, one representative is determined, exhibiting the maximal boosted intra-cluster similarity. This representative corresponds to a keyphrase candidate, and its scope over the document is assessed. A confidence-scoring function is computed for each candidate, relying on the significance of the cue token contained, the scope contribution of the representing cluster, and the number of words in the candidate. The result is a ranked list of keyphrase candidates, each one with an assigned confidence score between zero and one.

4.2.2 Deployment

The keyphrase extraction has been deployed in a number of different scenarios. It has been integrated into the information visualisation workbench for exploratory document collection analysis (IVEA) (Thai, Handschuh & Decker 2008), where it helps to extract prominent phrases that are not yet part of an underlying ontology for semantic annotation. It has been deployed on the KDE-NEPOMUK semantic desktop as part of the semantic note-taking tool SemNotes, and as a component of the TextAnalytics service in the NEPOMUK-Eclipse Social Semantic Desktop. The component is described online, where it is available in a variety of customised packages, including a library API for integration into other projects. A web interface exposes the functionality for testing and demo purposes, and is accessible from the project page.

Figure 35: Candidate Grouping Strategy.

Keyphrase Extraction in NEPOMUK-PSEW as Part of the Unified Recommender

The keyphrase extraction has also been deployed as a service for NEPOMUK-Eclipse, a reference implementation of the social semantic desktop as proposed by the NEPOMUK consortium. The keyphrase extraction functionality is part of the TextAnalytics component, which offers a wider spectrum of NLP-based services, such as information extraction and speech act detection. The functionality can be accessed in a very simple way by any component residing on the desktop, for the suggestion of free-form associated keyphrases for textual documents. The approach stands in contrast to a knowledge-driven component for information extraction, which only recognises instances of classes that are already contained in the knowledge base, suggesting a use case on the NEPOMUK desktop where the two approaches complement each other. Moreover, the output of the keyphrase extraction could be used as input for automatic summarisation algorithms.

Keyphrase Extraction in NEPOMUK-KDE as Part of SemNotes

The semantic note-taking tool SemNotes builds on the semantic desktop release Nepomuk-KDE for KDE4, and has been designed to accommodate a number of brief, personal interlinked records, as described in Section 3.2.1. The nature of the tool, handling textual information and the need to briefly annotate the data with a small number of descriptive terms, suggested the use of the keyphrase extraction application. The amount of information stored in each individual note is rather small, thereby creating a particular challenge for the keyphrase extraction component.

When the user requests the generation of keyphrases for a particular note, she is presented with a list of relevant terms describing the content, and it is up to her which suggestions to accept. Accepted keyphrases are stored as tags in the system-wide meta-data store and can be used to generate views of notes which have been associated with a given term, very much in the fashion of well-known blog interfaces. Here, the keyphrase extraction tool acts as a middleware service on the operating system, communicating with SemNotes via DBus, a popular message bus for Linux providing an implementation of an inter-process communication (IPC) protocol. This approach should enable other applications and desktop resources on Nepomuk-KDE to easily consume the keyphrase extraction service.

4.2.3 Evaluation

The approach has been extensively evaluated for English documents, both against a gold standard, where recall was determined at 51.8%, and in a user study, with an average acceptance rate of 49%.

Gold Standard Evaluation

The open digital archive PubMed Central currently hosts literature from 644 different journals of the biomedical domain and the life sciences. A subset of its literature is provided as an XML dataset for NLP and data mining purposes, which has been utilised for the construction of the ground truth. The dataset obtained from PubMed Central (ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/articles.tar.gz) comprises 77,496 peer-reviewed articles. Its XML schema offers a fine-grained distinction of various aspects of publication-related metadata (journal, authors, affiliation, index terms/keywords), full text and references. From these articles, only those containing assigned keywords were considered, reducing the number of articles used as the evaluation dataset to 1,323, consisting of 4,921,583 words in total. The 1,323 articles were distributed across 254 different journals published by PubMed Central, ranging from Abdominal Imaging to World Journal of Urology. The text entered into the keyphrase extraction algorithm did not contain the keyphrases assigned to the articles.

User Study

The experiment was conducted via a web application specifically developed for this undertaking, presenting the self-selected judges with an interface to the keyphrase extractor and an upload mechanism for the documents. The predictions were also presented via a web interface in such a way that judges were able to conveniently fill out the generated forms and submit their assessment of whether a generated keyphrase candidate was accepted or rejected. In case of rejection, the user was asked to choose one of three options indicating a finer-grained reason, i.e., too general, too specific or nonsense. A screenshot of the evaluation interface is depicted in Figure 36. Overall, 47 users signed up for the experiment, which was running for 10 days. In total, 94 documents were used as input, with the largest document at 81,668 words, whereas the smallest document consisted of only 4 words (the instructions suggested using reasonably sized documents, if possible consisting of at least 500 words). The average document length was 7,671 words, the median was determined at 5,128 words per document, and it took an average of just over 3 minutes to determine the good and bad candidates per user, per document.

Figure 36: Evaluation Form for Qualitative Assessment of Keyword Extraction.

No restrictions were imposed on the document content, and judges were encouraged to use documents from all sorts of domains, ranging from scientific articles, technical records such as RFCs, contemporary writing and news, to personal communication. Documents, however, were required to be written in English and of type PDF, Microsoft Word, plain text or HTML. The judges came from a multitude of backgrounds, ranging from PhD students and researchers (mostly in computer science) to IT professionals, engineers, as well as persons employed in the financial sector.

Results and Lessons Learned

Both experiments revealed that a bigger proportion of good keyphrases is found at the beginning of the candidate list, confirming the suitability of the confidence ranking function.

Figure 37: Distribution over Keyphrase Prediction List: Matching Types in Gold Standard Evaluation.

Considering the information in Figure 37, at around 40% of the candidate list the contribution of good candidates starts to fall below significant values. The qualitative evaluation suggests that bad candidates begin to gain ground on the good ones after 20%, as shown in Figure 38; however, an outnumbering does not take place until after 90% of the candidate list. Thus, a good position for a cut-off will lie somewhere between 20% and 90%, though in reality it will depend on the practical setting and the luxury of including a number of misses.

Figure 38: Distribution over Keyphrase Prediction List: Accept/Reject in User Study.

The user study experiment also gave further insights into the nature of rejections: a large proportion of rejections was rated as too general to be used as an index term. This may be true when each single one is viewed in isolation, although when the whole set of produced candidates is taken into consideration, it could be argued that the general phrases receive their context from the candidates that have been found acceptable. About one third of the general candidates were single-word phrases, thus it would be possible to exclude such predictions altogether. Unfortunately, around 43% of all accepted candidates consisted of a single word, which would mean those would also be lost. As in a number of cases single-word candidates judged too general were already included in keyphrase predictions of larger word cardinality, a post-processing step could be implemented that discards the proposal of such single-word phrases in case they are already part of another keyphrase. A detailed examination of the rejections classified as nonsense revealed that a considerable amount resulted from text-conversion errors and mistakes at the very beginning of the linguistic processing pipeline. While it is very difficult to undo mistakes caused by ligature-to-text conversion in a preprocessing step, phrases garbled and broken by hyphenation could be bypassed easily. These findings are encouraging and suggest that the algorithm has the potential of outperforming state-of-the-art approaches such as KEA (Frank, Paynter, Witten, Gutwin & Nevill-Manning 1999; Jones & Paynter 2002) and Extractor/GenEx (Turney 2000), provided the flaws are eliminated and the ideas for improvements are realised in a future development cycle.
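The single-word post-processing step suggested above is straightforward to realise; the following is a minimal Java sketch (hypothetical code, not part of the released component):

    import java.util.*;

    class KeyphrasePostProcessor {
        // Drops a single-word candidate if it already occurs inside a multi-word candidate.
        static List<String> filter(List<String> rankedCandidates) {
            List<String> kept = new ArrayList<>();
            for (String cand : rankedCandidates) {
                String c = cand.trim().toLowerCase();
                boolean singleWord = !c.contains(" ");
                boolean containedElsewhere = singleWord && rankedCandidates.stream()
                        .map(o -> o.trim().toLowerCase())
                        .anyMatch(o -> !o.equals(c)
                                && Arrays.asList(o.split("\\s+")).contains(c));
                if (!containedElsewhere) kept.add(cand);
            }
            return kept;
        }
    }

Applied to the ranked candidate list, this would retain accepted single-word keyphrases that stand on their own while removing those judged too general because a longer candidate already covers them.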

4.3 ROA - Roundtrip Ontology Authoring

4.3.1 Introduction

Formal data representation can be a significant deterrent for non-expert users or small organisations seeking to create ontologies and subsequently benefit from adopting semantic technologies. Existing ontology authoring tools such as Protégé attempt to resolve this, but they often require specialist skills in ontology engineering on the part of the user. This is even more exasperating for domain specialists, such as clinicians, business analysts, legal experts, etc. Such professionals cannot be expected to train themselves to comprehend Semantic Web formalisms, and the process of knowledge gathering, involving both a domain expert and an ontology engineer, can be time-consuming and costly. Controlled languages for knowledge creation and management offer an attractive alternative for naive users wishing to develop small to medium-sized ontologies, or a first draft ontology which can subsequently be post-edited by the ontology engineer. In previous work (Funk, Tablan, Bontcheva, Cunningham, Davis & Handschuh 2007), we presented CLOnE, a Controlled Language for Ontology Editing, which allows naive users to design, create, and manage information spaces without knowledge of complicated standards (such as XML, RDF and OWL) or ontology engineering tools. CLOnE's components are based on GATE's existing tools for IE (Information Extraction) and NLP (Natural Language Processing) (Cunningham, Maynard, Bontcheva & Tablan 2002b).

Figure 39: The ROA RoundTrip Ontology Authoring pipeline.

The CLOnE system was evaluated using a repeated-measures, task-based methodology in comparison with a standard ontology editor, Protégé. CLOnE performed favourably with test users in comparison to Protégé. Despite the benefits of applying Controlled Language Technology to Ontology Engineering, a frequent criticism against its adoption is the learning curve associated with following the correct syntactic structures and/or terminology in order to use the Controlled Language properly. Adhering to a controlled language can be, for some naive users, time-consuming and annoying. These difficulties are related to the habitability problem, whereby users do not really know what commands they can or cannot specify to the NLI (Natural Language Interface) (Thompson, Pazandak & Tennant 2005). Where the CLOnE system uses natural language analysis to unambiguously parse CLOnE in order to create and populate an ontology, the reverse of this process, NLG (Natural Language Generation), involves the generation of the CLOnE language from an existing ontology. The text generator and CLOnE authoring processes combine to form a RoundTrip Ontology Authoring (ROA) environment: a user can start with an existing imported ontology or one originally produced using CLOnE, (re)produce the Controlled Language using the text generator, modify or edit the text as required, and subsequently parse the text back into the ontology using the CLOnE environment. The process can be repeated as necessary until the required result is obtained. Building on the previous methodology from Funk et al. (2007), we undertook a repeated-measures, task-based evaluation, comparing the RoundTrip Ontology Authoring process with Protégé. Where previous work required a reference guide in order to use the controlled language, the substitution of NLG can reduce the learning curve for users, while simultaneously improving upon existing results for basic ontology editing tasks.

4.3.2 Design and Implementation

The Round Trip Ontology Authoring (ROA) pipeline, which is implemented in GATE (Cunningham et al. 2002b), builds on and extends the existing advantages of the CLOnE software and input language. Procedurally, CLOnE's analysis consists of the ROA pipeline of processing resources (PRs) shown in Figure 39 (left dotted box). This pipeline starts with a series of fairly standard GATE NLP tools which add linguistic annotations and annotation features to the document. These are followed by three PRs developed particularly for CLOnE: the gazetteer of keywords and phrases fixed in the controlled language, and two JAPE transducers which identify quoted and unquoted chunks. (GATE provides the JAPE, Java Annotation Pattern Engine, language for matching regular expressions over annotations, adding additional annotations to matched spans, and manipulating the match patterns with Java code.) Names enclosed in pairs of single or double quotation marks can include reserved words, punctuation, prepositions and determiners, which are excluded from unquoted chunks in order to keep the syntax unambiguous. The last stage of analysis, the CLOnE JAPE transducer, refers to the existing ontology in several ways in order to interpret the input sentences. Table 1 below provides an excerpt of the grammar rules of the CLOnE language. We refer the reader to Funk, Davis, Tablan, Bontcheva & Cunningham (2006) and Funk et al. (2007) for additional rules and examples.

Table 1: Excerpt of CLOnE grammar with examples

Sentence Pattern: Forget everything.
Example: Forget everything.
Usage: Clear the whole ontology corpus to start with a new ontology.

Sentence Pattern: (Forget that) There is/are <classes>.
Example: There are researchers, universities and conferences.
Usage: Create or delete (new) classes.

Sentence Pattern: (Forget that) <instances> is a/are <class>.
Example: Ahmad Ali Iqbal and Brian Davis are Ph.D. Scholar.
Usage: Create (or delete) instances of the class.

Sentence Pattern: (Forget that) <subclasses> is/are a type/types of <superclass>.
Example: Ph.D. Scholar is a type of Student.
Usage: Make subclass(es) of an existing super-class. "Forget that" only unlinks the subclass-superclass relationship.

Sentence Pattern: (Forget that) <classes/instances> <verb property> <classes/instances>.
Example: Professor supervises student.
Usage: Create a property of the form Domain verb Range, either between two classes or between two instances.

Text generation of CLOnE

The text generation component in Figure 39 (right dotted box) of the ROA pipeline is essentially an ontology verbalizer. Unlike some NLG systems, the communicative goal of the text generator is not to construct tailored reports for specific content within the knowledge base or to respond to user-specific queries.

Hence no specific content selection subtask or choice is performed, since our goal is to describe and present the ontology in textual form, as an unambiguous subset of English (the CLOnE language) for reading, editing and amendment. We select the following content from the ontology: top-level classes, subclasses, instances, class properties with their respective domains and ranges, and instance properties. The text generator is configured using an XML file, whereby text templates are instantiated and filled with the values from the ontology. This file is decoupled from the text generator PR. Examples of two templates used to generate top-level classes and class properties are displayed in Figure 40. The text generator (see Generator in Figure 39) is realised as a GATE PR and consists of three stages:

Stage 1 within the text generator converts the input ontology into an internal GATE ontological resource and flattens it into RDF-style triples. This is executed in a breadth-first manner, so lists are created where super-classes always precede their corresponding subclasses, in the following order: top-level classes, subclasses, instances, class properties, and instance properties.

Stage 2 matches generation templates from the configuration file (see Figure 40) with the triples list derived from the ontology in Stage 1. A generation template has three components: (1) an in element containing a list of triple specifications, (2) an out element containing phrases that are generated when a successful match has occurred, and (3) an optional ignoreIf element for additional triple specifications that cause a match specified in the in element to be ignored if the conditions are satisfied. The triple specifications contained within the in portion of the template can have subject, property and object XML elements. The triple specifications act as restrictions or conditions, such that an input triple generated from the ontology must match this template. If more than one triple is included in the in element, they are considered a conjunction of restrictions; hence the template will only match if one or more actual triples are found for all triple specifications within the in element. One triple can reference another, i.e., a specification can constrain a second triple to have the same object as the subject of the first triple. Only backward referencing is permitted, since the triples are matched in a top-down fashion according to their textual ordering. An example of referencing can be seen in line 188 of the out element of the template shown in Figure 40 for generating class properties.

In Stage 3 the out section of the template describes how text is generated from a successful match. It contains phrase templates that have text elements and references to values matched within the in elements. Phrases are divided into singular and plural forms. Plural variants are executed when several triples are grouped together to generate a single sentence (sentence aggregation) based on a list of ontology objects (e.g., "There are Conferences, Students and Universities"). Text elements within a template are simply copied into the output, while reference values are replaced with actual values based on the matching triple specifications.
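As an illustration of the Stage 2 matching logic, the following minimal Java sketch (hypothetical types and names, not the actual GATE PR code, and omitting the ignoreIf handling) matches a template's triple specifications conjunctively against the flattened triple list, with a backward reference constraining a triple's subject to equal the object of an earlier match:

    import java.util.*;

    record Triple(String subject, String property, String object) {}

    // One triple specification; null fields act as wildcards, and subjectRefersToObjectOf
    // optionally names an earlier spec whose matched object the subject must equal.
    record TripleSpec(String subject, String property, String object,
                      Integer subjectRefersToObjectOf) {}

    class TemplateMatcher {
        // Returns one triple per spec if all specs match conjunctively, or null otherwise.
        static List<Triple> match(List<TripleSpec> in, List<Triple> triples) {
            List<Triple> bound = new ArrayList<>();
            for (TripleSpec spec : in) {
                Triple found = null;
                for (Triple t : triples) {
                    boolean ok = (spec.subject() == null || spec.subject().equals(t.subject()))
                            && (spec.property() == null || spec.property().equals(t.property()))
                            && (spec.object() == null || spec.object().equals(t.object()));
                    // backward reference: subject must equal the object of an earlier match
                    if (ok && spec.subjectRefersToObjectOf() != null)
                        ok = t.subject().equals(bound.get(spec.subjectRefersToObjectOf()).object());
                    if (ok) { found = t; break; }
                }
                if (found == null) return null; // conjunction of restrictions failed
                bound.add(found);
            }
            return bound; // Stage 3 would fill the out phrase templates from these matches
        }
    }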
We also added a small degree of lexicalization to the Text Generator PR, whereby, for example, an unseen property, which is treated as a verb, is inflected correctly for surface realisation, e.g. study and studies. This involves a small amount of dictionary look-up using the SimpleNLG library to obtain the third-person singular inflection studies from study, in order to produce "Brian Davis studies at NUIG". The out elements of the generation template also provide several phrase templates for the singular and plural sections. These are applied in rotation to prevent tedious and repetitious output. Stage 2 also groups matches together into sets that can be expressed together in a plural form. For this to proceed, the required condition is that the difference between matches occurs in only one of the references used in the phrase templates, i.e., if the singular variants would only differ by one value. A specialized generation template with no in restrictions is also included in the configuration file. This allows for the production of text where there are no specific input triple dependencies.

53 Figure 40: Example of a generation template. dependencies Evaluation Methodology Our methodology is deliberately based on the criteria previously used to evaluate CLOnE Funk et al. (2006, 2007), so that we can fairly compare the earlier results using the CLOnE software with the newer RoundTrip Ontology Authoring(ROA) process. The methodology involves a repeated-measures, taskbased evaluation: each subject carries out a similar list of tasks on both tools being compared. Unlike our previous experiment, the CLOnE reference guide list and examples are withheld from the test users, so that we can measure the benefits of substituting the text generator for the reference guide and determine its impact on the learning process and usability of CLOnE. Furthermore, we used a larger sample size and more controls for bias. All evaluation material and data are available online for inspection, including the CLOnE evaluation results for comparison 53. The evaluation contained the following: A pre-test questionnaire asking each subject to test their degree of knowledge with respect to ontologies, the Semantic Web, Protégé and Controlled Languages. It was scored by assigning each answer a value from 0 to 2 and scaling the total to obtain a score of A short document introducing Ontologies, the same quick start Protégé instructions as used in Funk et al. (2006), and an example of editing CLOnE text derived from the text generator. The CLOnE reference guide and detailed grammar examples used in for the previous experiment Funk et al. (2006) were withheld. Subjects were allowed to refer to an example of how to edit generated Controlled Language but did not have access to CLOnE reference guide. A post-test questionnaire for each tool, based on the System Usability Scale (SUS), which also produces a score of to compare with previous results Brooke (1996). A comparative questionnaire similar to the one used in Funk et al. (2006) was 53 Deliverable 1.3 Version

applied to measure each user's preference for one of the two tools. It is scored similarly to SUS, so that 0 would indicate a total preference for Protégé, 100 would indicate a total preference for ROA, and 50 would result from marking all questions neutral. Subjects were also given the opportunity to make comments and suggestions.
- Two equivalent lists of ontology-editing tasks, each consisting of the following subtasks: creating two subclasses of existing classes, creating two instances of different classes, and either (A) creating a property between two classes and defining a property between two instances, or (B) extending properties between two pairs of instances.

For both task lists, an initial ontology was created using CLOnE. The same ontology was loaded into Protégé for both tasks, and the text generator was executed to provide a textual representation of the ontology for editing purposes (see Figure 41), again for both tasks.

Figure 41: Text generated by ROA.

Sample quality

We recruited 20 volunteers from the Digital Enterprise Research Institute, Galway. We tried to ensure that participants had limited or no knowledge of GATE or Protégé. First, subjects were asked to complete the pre-test questionnaire; they were then permitted time to read the Protégé manual and the text generator examples; and lastly they were asked to carry out each of the two task lists with one of the two tools. (Half the users carried out task list A with ROA and then task list B with Protégé; the others carried out A with Protégé and then B with ROA.) Each user's time for each task list was recorded. After each task list the user completed the SUS questionnaire for the specific tool used, and finally the comparative questionnaire. Comments and feedback were also recorded on the questionnaire forms.

Quantitative findings

Table 2 summarizes the main measures obtained from our evaluation. We used SPSS to generate all our statistical results. In particular, the mean ROA SUS score is above the baseline of 65-70%, while the mean SUS score for Protégé is well below that baseline, Bailey (2006).
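For reference, the SUS ratings reported here follow Brooke's standard scoring, which maps ten 1-5 Likert responses onto a 0-100 scale; a minimal sketch of that calculation:

    // Standard SUS scoring (Brooke 1996): odd-numbered items are positively
    // worded and contribute (response - 1); even-numbered items are negatively
    // worded and contribute (5 - response); the sum is scaled by 2.5.
    static double susScore(int[] responses) { // responses[0] holds item 1, values 1-5
        if (responses.length != 10)
            throw new IllegalArgumentException("SUS has exactly 10 items");
        int sum = 0;
        for (int i = 0; i < 10; i++)
            sum += (i % 2 == 0) ? responses[i] - 1 : 5 - responses[i];
        return sum * 2.5; // 0 (worst) to 100 (best)
    }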

Table 2: Summary of the questionnaire scores (min, mean, median and max of the pre-test scores, the ROA SUS rating, the Protégé SUS rating and the R/P Preference).

In the ROA/Protégé Preference (R/P Preference) scores, based on the comparative questionnaires, we note that the scores on average also favour ROA over Protégé. Confidence intervals are displayed in Table 3. (A data sample's 95% confidence interval is a range that is 95% likely to contain the mean of the whole population that the sample represents, John L. Phillips (1996).)

Table 3: Confidence intervals (95%) for the SUS scores of each tool (Protégé and ROA), for task list A, task list B and both combined.

We also generated Pearson's and Spearman's correlation coefficients, Connolly & Sluckin (1971), John L. Phillips (1996); Table 4 displays them. In particular, we note the following results. The pre-test score has a weak negative correlation with the ROA task time; there is no correlation between the pre-test score and the ROA SUS score. The pre-test score has a weak negative correlation with the Protégé SUS score; there is no correlation between the pre-test score and the Protégé task time. In our previous results comparing CLOnE and Protégé, the task times for the two tools were more positively correlated with each other, whereas for ROA and Protégé this correlation has been weakened by a significant 32% of its original value (78%) reported for CLOnE, Funk et al. (2007), indicating that users tended not to spend equivalent time completing the ROA and Protégé tasks. There is a moderate correlation between the Protégé task time and the Protégé SUS scores. There is a strong negative correlation between the ROA task time and the ROA SUS scores; our previous work reported no correlation between the CLOnE task time and the CLOnE SUS score. A strong negative (inverse) correlation implies that users who spent less time completing a task using ROA tended to give high usability scores, favouring ROA. More importantly, we noted that the associated probability reported by SPSS was less than the typical 5% cut-off point used in the social sciences, implying that the true population coefficient is very unlikely to be 0 (no relationship).

Table 4: Correlation coefficients (strengths summarised qualitatively)

Measure         Measure           Correlation
Pre-test        ROA time          weak (negative)
Pre-test        Protégé time      none
Pre-test        ROA SUS           none
Pre-test        Protégé SUS       weak (negative)
ROA time        Protégé time      positive (weakened vs. CLOnE)
ROA time        ROA SUS           strong negative
Protégé time    Protégé SUS       moderate
ROA time        Protégé SUS       none
Protégé time    ROA SUS           none
ROA SUS         Protégé SUS       none
ROA SUS         R/P Preference    moderate
Protégé SUS     R/P Preference    none

Conversely, one can infer statistically that 19 out of 20 (95%) users with little or no experience in either NLP or Protégé who favour RoundTrip Ontology Authoring over Protégé will also tend to spend less time completing Ontology editing tasks. The R/P Preference score correlates moderately with the ROA SUS score, similar to previous results, but it no longer retains a significant inverse correlation with the Protégé SUS score. The reader should note that the R/P Preference scores favour ROA over Protégé.

We also varied the tool order evenly among our sample. As noted previously in Funk et al. (2007), once again the SUS scores differed slightly according to tool order (as indicated in Table 3). Previous SUS scores for Protégé tended to be slightly lower for task list B than for A, which we believe may have resulted from the subjects' decreasing interest as the evaluation progressed. While in the previous results there was a decrease in SUS scores for CLOnE (yet still well above the SUS baseline), in the case of ROA the SUS scores increased for task list B (see Table 3), implying that if waning interest was a factor in the decrease in SUS scores for CLOnE, it does not appear to be the case for ROA. Of additional interest is that group I, the subjects with an industrial background, scored on average 10% higher for both the ROA SUS and the R/P Preference, which implies that industrial collaborators or professionals with an industrial background favoured a natural language interface over a standard Ontology editor even more than the researchers did.

Related work

Controlled Natural Languages (CLs) are subsets of natural language whose grammars and dictionaries have been restricted in order to reduce or eliminate both ambiguity and complexity, Schwitter (2007). CLs were later developed specifically for computational treatment and have subsequently evolved into many variations and flavours, such as Smart's Plain English Program (PEP), White's International

Table 5: Groups of subjects by source (R: Researcher, I: Industry) and tool order (PR or RP), with totals.

Table 6: Comparison of the two sources of subjects (min, mean, median and max of the pre-test, ROA SUS, Protégé SUS and R/P Preference scores for groups R and I).

Language for Serving and Maintenance (ILSAM), Adriaens & Schreurs (n.d.), and Simplified English. They have also found favour in large multi-national corporations, usually within the context of machine translation and machine-aided translation of user documentation, Adriaens & Schreurs (n.d.), Schwitter (2007). The application of CLs for ontology authoring and instance population is an active research area. Attempto Controlled English (ACE), Fuchs & Schwitter (1996), is a popular CL for ontology authoring. It is a subset of standard English designed for knowledge representation and technical specifications, and it is constrained to be unambiguously machine-readable into a Discourse Representation Structure (DRS). ACE OWL, a sublanguage of ACE, proposes a means of writing formal, simultaneously human- and machine-readable summaries of scientific papers, Kaljurand & Fuchs (2006), Kuhn (2006). Similar to RoundTrip Ontology Authoring, ACE OWL also aims to provide reversibility (translating OWL DL into ACE). The application of NLG for the purpose of editing existing ACE text is mentioned in Kaljurand & Fuchs (2007); that paper discusses the implementation of a shallow NLG system, an OWL verbalizer, focusing primarily on the OWL-to-ACE rewrite rules, but no evaluation or quantitative data are provided in an attempt to measure the impact of NLG on the authoring process. Furthermore, OWL's allValuesFrom must be translated into a construction which can be rather difficult for humans to read. A partial implementation is, however, available for public testing.

Another well-known implementation which employs the use of NLG to aid the

knowledge creation process is WYSIWYM (What You See Is What You Meant). It involves direct knowledge editing with natural-language directed feedback: a domain expert can reliably edit a knowledge base by interacting with natural language menu choices and the subsequently generated feedback, which can then be extended or re-edited using the menu options. The work is conceptually similar to RoundTrip Ontology Authoring; however, the natural language generation occurs as feedback to guide the user during the editing process, as opposed to providing an initial summary in Controlled Language for editing. A usability evaluation is provided in Piwek (2002), in the context of knowledge creation, partly based on IBM heuristic evaluations, but no specific quantitative data that we are aware of is presented. However, evaluation results are available for the MILE (Maritime Information and Legal Explanation) application, which used WYSIWYM in the context of query formulation for the CLIME project (Cooperative Legal Information Management and Explanation, an Esprit project), and the outcome was favourable, Piwek (2002).

Similar to WYSIWYM, GINO (Guided Input Natural Language Ontology Editor) provides a guided, controlled NLI (natural language interface) for domain-independent ontology editing for the Semantic Web. GINO incrementally parses the input, not only to warn the user as soon as possible about errors, but also to offer the user (through the GUI) suggested completions of words and sentences, similarly to the code-assist feature of Eclipse and other development environments. GINO translates the completed sentence into triples (for altering the ontology) or SPARQL queries and passes them to the Jena Semantic Web framework. Although the guided interface facilitates input, the sentences are quite verbose and do not allow for aggregation, and a full textual description of the Ontology is not realised, as it is in the case of the CLOnE text generator, Bernstein & Kaufmann (2006). Furthermore, similar to our evaluation, a small usability evaluation was conducted using SUS, Brooke (1996); however, the sample set of six was too small to infer any statistically significant results, Tullis & Stetson (2004). In addition, GINO was not compared to any existing Ontology editor during the evaluation. Finally, Namgoong & Kim (2007) present an Ontology-based Controlled Natural Language editor, similar to GINO, which uses a context-free grammar with lexical dependencies (CFG-DL) to generate RDF triples. To our knowledge the system ports only to RDF and does not cater for other Ontology languages. Furthermore, no quantitative user evaluation is provided.

Other related work involves the application of Controlled Languages for Ontology or knowledge-base querying, which is a different task from knowledge creation and editing but is worth mentioning for completeness' sake. Most notably, AquaLog is an ontology-driven, portable Question-Answering (QA) system designed to provide a natural language query interface to semantic mark-up stored in a knowledge base. PowerAqua, Lopez, Motta & Uren (2006), extends AquaLog, allowing open-domain question answering for the Semantic Web; the system dynamically locates and combines information from multiple domains.

Conclusion & Discussion

The main research goal of ROA is to assess the effect of introducing Natural Language Generation (NLG) into the CLOnE Ontology authoring process in order to facilitate RoundTrip Ontology Authoring.
The underlying basis of our research problem is the habitability problem (see Section 4.3.1): how can we reduce the learning curve associated with Controlled Languages, and how can we ensure their uptake as a Natural Language Interface (NLI)? Our contribution is empirical evidence to

support the advantages of combining NLG with ontology authoring, a process known as RoundTrip Ontology Authoring (ROA). The reader should note that we compared Protégé with ROA because Protégé is the standard tool for ontology authoring. Previous work, Funk et al. (2007), compared CLOnE with Protégé; hence, in order to compare ROA with CLOnE, it was necessary to repeat the experiment and use Protégé as the baseline. We make no claims that Protégé should be replaced with ROA; the point is that ROA can allow for the creation of a quick, easy first draft of a complex Ontology by domain experts, or the creation of small to medium-sized Ontologies by novice users. Domain experts are not Ontology engineers. Furthermore, a large percentage of an initial Ontology would naturally consist of taxonomic relations and simple properties/relations. Our user evaluation consistently indicated that our subjects found ROA (and continue to find CLOnE) significantly more usable and preferable than Protégé for simple Ontology editing tasks. In addition, our evaluation differs in that we implemented tighter restrictions during our selection process, to ensure that users had no background in NLP or Ontology engineering. Furthermore, the 40% of our subjects with an industrial background tended to score ROA 10% higher than the researchers did, indicating that an NLI to an Ontology editor might be a preferred option for Ontology development within industry.

In detail, this evaluation differs from the previous work, Funk et al. (2007), in two important respects: (1) we excluded the CLOnE reference manual from the training material provided in the previous evaluation; and (2) we introduced a text generator, verbalizing CLOnE text from a given populated Ontology, and asked users to edit the Ontology using the generated CLOnE text, based on an example provided. We observed two new significant improvements in our results. First, the previous evaluation indicated a strong correlation between CLOnE task times and Protégé task times; this correlation has significantly weakened, by 32%, between ROA and Protégé task times. Hence, where users previously required equivalent time to complete tasks in both CLOnE and Protégé, this is no longer the case with ROA (the difference being the text generator). Second, our previous evaluation indicated no correlation between either CLOnE/Protégé task times and their respective SUS scores; with ROA, however, we can now infer that 95% of the total population of naive users who favour RoundTrip Ontology Authoring over Protégé would also tend to spend less time completing Ontology editing tasks. We suspect that this is due to the reduced learning curve brought about by the text generator. Furthermore, ROA tended to retain user interest, which CLOnE did not; we suspect that the absence of the need to refer to the CL reference guide was a factor in this.

While Protégé is intended for more sophisticated knowledge engineering work, this is not the case for ROA. Scalability, both in performance and usage, was also an issue raised by our test subjects. From a performance perspective, when loading large Ontologies, we do not foresee any major issues, as ROA is currently being ported to the newest release of GATE, which contains a completely new Ontology API that utilises the power of OWLIM (OWL in Memory), a high-performance semantic repository developed at Ontotext.
Finally, from a user perspective, an authoring memory, as frequently used in translation memory systems, or text generation of selected portions of the Ontology (using a Visual Resource) could significantly aid the navigation and authoring of large Ontologies.

4.3.6 Integration in SemNotes and Nepomuk-KDE

SemNotes is a note-taking application developed for KDE4 using Nepomuk-KDE libraries. It uses the PIMO ontology to store the notes in the Nepomuk RDF store as instances of pimo:note. The data stored about a note consists of: title, content, tags, and creation and last modification date/time. ROA has been wrapped as an Analyzer plugin for SemNotes (see Section for details). ROA has been jointly developed by the Sheffield NLP Group at the University of Sheffield and DERI, NUIG, and is currently available upon request. At the time of writing, we were finalising the licensing arrangements for releasing ROA via the next suitable version of GATE, as well as via SourceForge as an Analyzer plugin for SemNotes. Currently, the ROA Analyzer plugin acts as a middleware service on the operating system, communicating with SemNotes via DBus, a popular message bus for Linux that provides an implementation of an inter-process communication (IPC) protocol. The component is further described online (ServiceDescription/RoundtripOntologyAuthoring).
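To illustrate the DBus coupling, a client such as SemNotes could invoke the analyzer roughly as follows. This is a hypothetical sketch using the dbus-java bindings; the bus name, object path and method name are invented for illustration, the real interface being the one documented in the service description referenced above.

    // Hypothetical sketch of a DBus call to the ROA analyzer service via the
    // dbus-java bindings; bus name, object path and method are assumptions.
    import org.freedesktop.dbus.DBusConnection;
    import org.freedesktop.dbus.DBusInterface;

    interface RoaAnalyzer extends DBusInterface {
        String analyze(String text); // hypothetical method
    }

    public class RoaClient {
        public static void main(String[] args) throws Exception {
            DBusConnection session = DBusConnection.getConnection(DBusConnection.SESSION);
            RoaAnalyzer roa = session.getRemoteObject(
                    "org.nepomuk.roa", "/RoaAnalyzer", RoaAnalyzer.class); // assumed names
            System.out.println(roa.analyze("There are conferences and students."));
            session.disconnect();
        }
    }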

4.4 Speech Act Detection

A lot of work in today's business environments depends on online communication. Tasks are created, managed and delegated; meetings requested and scheduled; important data exchanged, all via online communication media and on a daily basis. Communication means like email and instant messaging (IM) have become essential abstract working environments. Keeping track of these workflows is not easy, and frequently people become inundated with more data than they can possibly handle, a problem termed information overload, Whittaker & Sidner (1996). As a result, questions get ignored, commitments forgotten, and in general collaboration and ultimately productivity suffer. There have been numerous attempts at automatically extracting action items, to-dos and general commitments from text pertaining to electronic conversations, especially with regard to those taking place over email. In particular, we have worked on models, Scerri & Handschuh (2008), that conceptualise these items and outline their expected workflows. Ultimately these models will be used in intelligent applications to support the user with the management of action items arising from unstructured electronic conversations. Semanta, Scerri, Handschuh & Decker (2008), is an email client plug-in which strives to provide these features for email communication. However, from a practical point of view, Semanta cannot rely on the end-user to recognise, classify and annotate each single action item; therefore, at the least, partial automation is required. A similar application that mines action items from text could be integrated within IM services. We will describe a declarative model, based on speech act theory, which promises to classify most of the action items within an online chat or an email thread. After describing the implementation of the Speech Act Annotation Service, we compare and discuss the results of automatic annotation versus manual annotation.

An overview of speech act theory is provided in Section 4.4.1, in which we discuss the linguistic features that our model considers to classify text, before presenting the model itself in Section 4.4.2. In Section 4.4.3 we describe how and what we implemented from our model. Section 4.4.4 discusses the collection and processing of email and chat corpora, and the results of their automatic annotation based on our model in comparison to manual annotation. In Section 4.4.5 we refer to related work before providing some concluding remarks. Finally, Section 4.4.6 describes the Speech Act Detection service and its integration within Nepomuk.

4.4.1 Background

The Speech Act Annotation Service attempts to classify textual segments from different kinds of electronic conversations (in the English language) into a number of classes. These classes are instances of the Speech Act Model provided in the sMail Framework, a conceptual framework for Semantic Email which we presented in earlier work, Scerri & Handschuh (2008). The model is based on Speech Act Theory, Searle (1969), which states that every utterance (sentence) implies an action by the speaker (sender), with possible effects on both the speaker (sender) themselves and the hearer (recipient). Thus the model deals with the intentions and expectations of email conversations. Although at face value email and IM conversations differ (in particular, email is asynchronous whereas chat is usually synchronous), their dialogue style is very similar.
The Speech Act model we refer to (represented within an ontology) is defined as a triple consisting of an:

Action - what is being performed, e.g., a request, a notification or an assignment.
Object - the object of the action, e.g., a request for a meeting.
Subject - the subject/agent of the object, if applicable, e.g., who would attend

the meeting.

Collectively these parameters form a Speech Act. The classes for our classification task coincide with a subset of the valid combinations of the (Action, Object, Subject) triple. Some combinations are not valid: whereas Activities (a category of objects including Task and Event) have a subject (being the task performer or the event participant), Data objects (a category including Information and Resource) do not. Additionally, some of the concepts in the model deal with contextual statements. For example, the Decline action as defined in the model is only valid in the context of a refusal of a preceding Request. Chaining subsequent utterances in electronic conversations is out of the scope of this work; instead we only deal with classifying text given its surface structure. Thus we end up with 22 classes for our classification task, as shown in Figure 42, where for simplification purposes we include the object categories introduced above (a.k.a. nouns).

Figure 42: The 22 individual speech acts for classification.

The four actions are:

Request - a statement requiring a reply from the recipient (e.g., a question);
Assign - a statement requiring an activity and no reply (e.g., an order or a commitment);
Suggest - a statement involving an optional activity;
Deliver - covering any other statement (e.g., a factual statement).

Whereas requests can involve both noun categories (i.e., a request for an event/task/information/resource), assignments and suggestions can involve activities only, and deliveries data only. Activity speech acts require the definition of a subject. The subject refers to who is implied in the activity and is not to be confused with what we call the Target (not part of the Speech Act model), which represents the person to whom the speech act is directed (consider multiple recipients). Thus, a request for permission to attend an event is represented as a (Request, Event, Sender), an order to perform a task as an (Assign, Task, Recipient), and a suggestion for a meeting between the sender and recipient(s) as a (Suggest, Event, Both). A request for information can be represented as a (Request, Information, θ), and an informing statement as a (Deliver, Information, θ).
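The validity rules just described are compact enough to capture in a few lines. The following sketch is our own illustration (not project code): it models the triple and its constraints, and enumerating all valid combinations yields exactly the 22 classes of Figure 42.

    // Illustrative model (not project code) of the (Action, Object, Subject)
    // triple: Activities require a subject, Data objects take none (θ);
    // Assign/Suggest apply to Activities only, Deliver to Data only,
    // Request to both. The valid combinations are the 22 classes of Figure 42.
    enum Action { REQUEST, ASSIGN, SUGGEST, DELIVER }

    enum SAObject {
        TASK(true), EVENT(true),              // Activity category
        INFORMATION(false), RESOURCE(false);  // Data category
        final boolean activity;
        SAObject(boolean activity) { this.activity = activity; }
    }

    enum SASubject { SENDER, RECIPIENT, BOTH, NONE } // NONE plays the role of θ

    record SpeechAct(Action action, SAObject object, SASubject subject) {
        boolean isValid() {
            // A subject is present if and only if the object is an Activity.
            if (object.activity == (subject == SASubject.NONE)) return false;
            switch (action) {
                case REQUEST: return true;             // both object categories
                case ASSIGN:
                case SUGGEST: return object.activity;  // activities only
                default:      return !object.activity; // DELIVER: data only
            }
        }
    }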

4.4.2 Conceptual Classification

In this section we present a declarative model for the classification of free text emanating from electronic conversations into speech acts, considering the five linguistic and grammatical features discussed below.

Features

Linguistic Modality. Modality is concerned with our opinions and attitudes, Palmer (2001). Most linguists agree that there are two general types of modality: Epistemic, which is concerned with the speaker's judgement of the truth of the proposition embedded in a statement; and Deontic, concerned with the illocutionary point or general intent of the speaker. These are comparable to Austin's obsolete dichotomy, Austin (1975), of Constative (epistemic) and Performative (deontic) utterances. There is also a distinction between Sentence Modality, which deals with different types of statements (i.e., declaratives, imperatives, interrogatives and exclamatives), and Verbal Modality, which deals with modal verbs and verb moods. In our classification task, we consider all sentence modality types but extend the declarative category to include exclamatives, since these have the same impact on the classification. Modal verbs (e.g., must, will, should) are used to express concepts like Possibility, Necessity, Permissibility and Probability. For our task we restrict these conceptual categories to two, Possibility and Necessity, which roughly equate to the Suggest and Assign speech act actions respectively. Although there is a conventional correlation between modality, moods and speech act forces (e.g., interrogative → request), the relationship is not always straightforward, Wilson (1998).

Verb Type. Verbs are used to express an Action, an Occurrence or a State of being. Since we attempt to recognise action items, our main interest is action verbs. Furthermore, we are mostly interested in two specific subsets of action verbs, which we call: Activity Verbs, representing activity nouns as implied in the speech act model (e.g., go, prepare); and Communicative Verbs, implying actions specific to the communication medium (e.g., send, attach).

Semantic Role. The model underlying the classification classes incorporates the speech act subject. Thus, when dealing with action verbs, we are interested in who is implied in the action. We cannot simply rely on the grammatical roles, i.e., the subject and the object of an action; instead we are interested in the semantic roles, i.e., the Agent and the Patient. In "I will call you" and "You will get a call from me" the subject and object alternate, but the agent ("I"/"me") and the patient ("you") refer to the same semantic person. Although the classes for classification only refer to the agent (termed the subject), the patient can also have an impact on the classification. The grammatical person also affects the classification task, so for both agent and patient we consider the grammatical persons First, First Plural, Second and Third.

Grammatical Tense. The tense morpheme specifies the time at/during which the descriptive content of the sentence in question holds, Ogihara (2007). There are different opinions when it comes to categorising tenses in the English language, Comrie (1985). Whereas some consider only two tenses, Past and non-Past, others consider their combinations with different moods and aspects (e.g., perfect, progressive) as 12 separate tenses. In our work we adhere to the first dichotomy, and we are mostly interested in actions that occur in the non-past.
Negation. Negation negates the proposition of an affirmative statement. From a pragmatic point of view, it usually expresses the exact opposite of what the statement would otherwise convey, i.e., impossibility instead of possibility, prohibition instead of permissibility, Moeschler (1992). Grammatically, both nouns and verbs can be negated via the use of a negative adjective, a negative pronoun or a negative adverb.

Figure 43 shows how we can extract knowledge about the presence or form of these features from an analysis of written text. We provide the most general triggers (using BNF notation) for each feature; e.g., interrogative sentences normally end with a question mark, whereas an imperative usually starts with a verb.
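As a rough illustration, such triggers reduce to simple pattern checks over a sentence. The sketch below is hypothetical: the actual pipeline (Section 4.4.3) relies on POS tags and gazetteer lists rather than bare regular expressions, and the modal-verb and negation word lists shown are abbreviated assumptions.

    // Hypothetical trigger-style checks in the spirit of the BNF notations of
    // Figure 43; the word lists are abbreviated assumptions, not the gazetteers.
    import java.util.regex.Pattern;

    final class ModalityTriggers {
        private static final Pattern QUESTION = Pattern.compile(".*\\?\\s*$");
        private static final Pattern NECESSITY =
            Pattern.compile("\\b(must|have to|need to|shall)\\b", Pattern.CASE_INSENSITIVE);
        private static final Pattern POSSIBILITY =
            Pattern.compile("\\b(could|may|might|should)\\b", Pattern.CASE_INSENSITIVE);
        private static final Pattern NEGATION =
            Pattern.compile("\\b(not|never|no)\\b|n't", Pattern.CASE_INSENSITIVE);

        // Exclamatives are folded into declaratives for this task.
        static String sentenceModality(String s) {
            return QUESTION.matcher(s).matches() ? "interrogative" : "declarative";
        }
        static String verbalModality(String s) {
            if (NECESSITY.matcher(s).find()) return "necessity";     // ~ Assign
            if (POSSIBILITY.matcher(s).find()) return "possibility"; // ~ Suggest
            return "none";
        }
        static boolean negated(String s) { return NEGATION.matcher(s).find(); }
    }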

Figure 43: Example BNF notations for feature recognition.

There are other linguistic features with a major influence on our classification task, not to mention issues like ambiguity and rhetorical and metaphorical language use. For this work we disregard these issues and confine them to future work. We are particularly aware of problems regarding multiple sentential clauses and their dependencies (e.g., conditional clauses). Summing up, the discussed features can only give an indication as to the illocutionary force of an utterance.

Declarative Classification Model

In this section we present the declarative model as depicted in Figure 44. Before providing some practical examples, we explain the general idea. The model breaks down the linguistic space into a number of dimensions based on the selected features. Statements are thus classified into exactly one of the resulting subsets, given the presence/form of the features. These subsets correspond to our 22 speech act classes. For simplicity, the sets do not include the speech act subject; thus the sets as shown in Figure 44 do not differentiate between the subjects of a speech act (i.e., Sender/Recipient/Both). The figure also abstracts over the speech act object, showing only the noun category instead. Thus, the 22 speech acts shown in Figure 42 have been grouped into the following 5 sets:

1. Request Data, incorporating (Request, Information, θ) and (Request, Resource, θ)
2. Deliver Data, incorporating (Deliver, Information, θ) and (Deliver, Resource, θ)
3. Request Activity, incorporating (Request, Task, Sender/Recipient/Both) and (Request, Event, Sender/Recipient/Both)
4. Suggest Activity (similarly incorporating 6 classes)
5. Assign Activity (incorporating another 6 classes)

As shown at the top of Figure 44, our sentence and verbal modality categories split the space vertically into 5 dimensions. Since there is an overlap between imperative sentences and declarative sentences that have a necessity modal verb, there appear to be only 4 dimensions. The space is split horizontally given our verb categories: communicative verbs, activity verbs and their complement. The 8 resulting portions dealing with action verbs are further segmented given the agent

semantic role, where A1S stands for Agent 1st Person Singular, A1P for Agent 1st Person Plural, and A2, A3 for Agent 2nd and 3rd Person respectively. The patient role has an impact in just one specific area: in this case an Agent segment (A2) is further split up into four segments P1S, P1P, P2 and P3, standing for Patient 1st Person Singular and Plural, and Patient 2nd and 3rd Person respectively. The other two features, negation and past tense, are represented as (overlapping) horizontal shades of grey across the space.

Figure 45 demonstrates how the model can be used to classify text into exactly one of our classes via 8 practical examples. In general, the presence of a past tense reduces all non-interrogative statements to a delivery of information (Figure 44), which is the most generic speech act. Thus whereas "You should forward it to me" is a Suggest-Activity (ex. A in Figure 45), "You should have forwarded it to me" would classify as a Deliver-Data. Similarly, the past tense reduces all interrogative statements to a request for information: whereas "Will I send you the file?" would have been a request for a personal task, "Haven't I sent you the file?" is only a request for information (ex. B). Negation also reduces all non-interrogatives to a Deliver-Data; however, negation does not have an impact on interrogatives. Thus "Are we discussing today?" and "Aren't we discussing today?" (ex. C) would both be requests for an activity. Although D and E differ only with respect to the patient role, they are classified differently: whereas D is a request for information (generalised as Request-Data), E is a request for the transfer of data to third parties, equivalent to a task assignment (generalised as Assign-Activity). The type of verb has a major impact on the classification. "We are attending the meeting" classifies as an Assign-Activity (ex. F), given the activity verb. "We are sending the files" is a sentence with similar features except that it has a communicative verb, and thus classifies as a Deliver-Data (ex. G). "We are happy" is also similar, but having neither a communicative nor an activity verb, it classifies as a Deliver-Data (ex. H).

Figure 44: The declarative model: the linguistic space split up according to the presence/form of the selected features.

Figure 45: Practical examples of classification.
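The reductions just illustrated can be summarised as a small decision procedure. The sketch below is our own simplification of Figure 44: it covers the five generalised sets and the past-tense and negation reductions, but omits the communicative-verb and patient-role dimensions (which distinguish examples D, E and G) for brevity.

    // Simplified decision procedure over the five generalised sets of
    // Figure 44 (our own illustration; the communicative-verb and
    // patient-role dimensions are omitted for brevity).
    enum GeneralClass {
        REQUEST_DATA, DELIVER_DATA, REQUEST_ACTIVITY, SUGGEST_ACTIVITY, ASSIGN_ACTIVITY
    }

    static GeneralClass classify(boolean interrogative, boolean possibility,
                                 boolean activityVerb, boolean pastTense,
                                 boolean negated) {
        if (interrogative) {
            // Past tense reduces any question to a request for information;
            // negation has no impact on interrogatives (examples B and C).
            if (pastTense || !activityVerb) return GeneralClass.REQUEST_DATA;
            return GeneralClass.REQUEST_ACTIVITY;
        }
        // Past tense and negation reduce all non-interrogatives to a delivery.
        if (pastTense || negated) return GeneralClass.DELIVER_DATA;
        if (activityVerb) {
            if (possibility) return GeneralClass.SUGGEST_ACTIVITY; // e.g. example A
            return GeneralClass.ASSIGN_ACTIVITY;                   // e.g. example F
        }
        return GeneralClass.DELIVER_DATA; // communicative/other verbs (ex. G, H)
    }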

4.4.3 Implementation

We implemented the Speech Act Annotation Service as an ANNIE (A Nearly New Information Extraction Engine) Conditional Corpus IE Pipeline in GATE, Cunningham et al. (2002b). We took a Knowledge-Based (KB) approach to IE, whereby Language Resources (LRs) were created manually by a Language Engineer based on consultations with a Speech Act Domain Modeler. LRs in the pipeline consist of the default gazetteer word lists for Person Named Entities, verb lists associated with each of the Speech Act Actions (e.g., Request, Assign) and trigger/key phrases associated with Speech Act Objects (e.g., Events, Tasks). Other miscellaneous linguistic features such as negation and anaphoric pronouns were also recorded as gazetteer entries. An overview of the IE pipeline is shown in Figure 46.

Figure 46: Our Information Extraction Pipeline.

The pipeline consists of: 1) a GATE English Tokeniser; 2) a modified Sentence Splitter; 3) the Hepple POS Tagger, which assigns a part-of-speech category to each token; 4) the ANNIE Gazetteer, a finite state lookup for all verbs and key phrases; 5) an NE transducer for Named Entity identification (e.g., Location, Person, Date); and 6) JAPE preprocessing, which separates Gazetteer Lookup annotations into individual annotation sets (here JAPE, Cunningham, Maynard & Tablan (2000), pattern rules benefit from previous linguistic/semantic annotations to perform speech act annotation at the sentential level).

Speech Acts were extracted based on a combination of hand-coded JAPE grammars and a finite state gazetteer lookup of trigger phrases. A JAPE grammar constitutes a cascade of finite state transducers over patterns of annotations, which may vary from Tokens (including POS tag categories) to simple default Named Entities in GATE (e.g., Person) to intermediate speech act annotations. The output of one JAPE transducer becomes the input of the next. Each JAPE transducer consists of a collection of phases which in turn contain pattern/action rules. The left-hand side (LHS) of a rule is written in a regular-expression style, whereas the right-hand side (RHS) consists of annotation-binding variables within a block of Java code, which can subsequently be manipulated as desired. JAPE rules can fire in various ways depending on the desired behaviour, e.g., based on textual ordering, priority or longest match. The majority of Speech Act annotations are based on rules of the form LHS → RHS, where the LHS consists of a combination of syntax order within a pattern and the RHS checks for annotation intersections (as shown in Figures 47 and 48). After gazetteer lookup in the pipeline, the default GATE NE transducer is used to extract generic Named Entities which may be useful at later stages. Following this, another two transducers are applied to further refine annotation and perform co-reference resolution on the pronoun "you" (with antecedents of type Person). Once the annotation sets for each Action, Object and Subject (if applicable) outlined in the Speech Act model have been captured, they are combined. This process is carried out by the Speech Act JAPE transducer within the pipeline. An example of a JAPE grammar rule for a Request-Task is shown below.
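Since that example figure is not reproduced in this version of the document, the following is a hypothetical reconstruction of what such a rule looks like in JAPE; the annotation types, gazetteer majorType values and feature names are illustrative assumptions, not the grammar actually shipped with the service.

    Phase: SpeechActs
    Input: Sentence Lookup
    Options: control = appelt

    Rule: RequestTask
    (
      { Sentence contains {Lookup.majorType == "request_trigger"},
        Sentence contains {Lookup.majorType == "task_phrase"} }
    ):sent
    -->
    :sent.SpeechAct = { action = "Request", object = "Task" }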


Learning Series. Volume 8: Service Design and Business Processes Learning Series Volume 8: Service Design and Business Processes NOTICES ServicePRO Learning Series Edition November 2014 HelpSTAR and ServicePRO are registered trademarks of Help Desk Technology International

More information

Digital Newsletter. Editorial. Second Review Meeting in Brussels

Digital Newsletter. Editorial. Second Review Meeting in Brussels Editorial The aim of this newsletter is to inform scientists, industry as well as older people in general about the achievements reached within the HERMES project. The newsletter appears approximately

More information

Version 5. Recruiting Manager / Administrator

Version 5. Recruiting Manager / Administrator Version 5 Recruiting Manager / Administrator 1 Contents 1.0 Introduction... 4 2.0 Recruitment at a Glance... 6 3.0 Viewing Applicant Numbers... 8 4.0 Activities After Closing Date... 10 5.0 Shortlisting...

More information

Act! User's Guide Working with Your Contacts

Act! User's Guide Working with Your Contacts User s Guide (v18) Act! User's Guide What s Contact and Customer Management Software?... 8 Act! Ownership Change... 8 Starting Your Act! Software... 8 Log on... 9 Opening a Database... 9 Setting Up for

More information

This report will document the key themes arising from the testing, and make recommendations for the development of the site.

This report will document the key themes arising from the testing, and make recommendations for the development of the site. Cloudworks usability testing February 2011 In this laboratory test four participants were given a series of nine short tasks to complete on the Cloudworks site. They were asked to verbalise their thought

More information

PROJECT PERIODIC REPORT

PROJECT PERIODIC REPORT PROJECT PERIODIC REPORT Grant Agreement number: 257403 Project acronym: CUBIST Project title: Combining and Uniting Business Intelligence and Semantic Technologies Funding Scheme: STREP Date of latest

More information

D4.6 Data Value Chain Database v2

D4.6 Data Value Chain Database v2 D4.6 Data Value Chain Database v2 Coordinator: Fabrizio Orlandi (Fraunhofer) With contributions from: Isaiah Mulang Onando (Fraunhofer), Luis-Daniel Ibáñez (SOTON) Reviewer: Ryan Goodman (ODI) Deliverable

More information

Content Enrichment. An essential strategic capability for every publisher. Enriched content. Delivered.

Content Enrichment. An essential strategic capability for every publisher. Enriched content. Delivered. Content Enrichment An essential strategic capability for every publisher Enriched content. Delivered. An essential strategic capability for every publisher Overview Content is at the centre of everything

More information

Introduction to Moodle

Introduction to Moodle Introduction to Moodle Preparing for a Moodle Staff Development Session... 2 Logging in to Moodle... 2 Adding an image to your profile... 4 Navigate to and within a course... 6 Content of the basic template

More information

THE USE OF PARTNERED USABILITY TESTING TO HELP TO IDENTIFY GAPS IN ONLINE WORK FLOW

THE USE OF PARTNERED USABILITY TESTING TO HELP TO IDENTIFY GAPS IN ONLINE WORK FLOW THE USE OF PARTNERED USABILITY TESTING TO HELP TO IDENTIFY GAPS IN ONLINE WORK FLOW Dianne Davis Fishbone Interactive Gordon Tait Department of Surgery, University of Toronto Cindy Bruce-Barrett Strategic

More information

OpenMDM Client Technologies Overview

OpenMDM Client Technologies Overview OpenMDM Client Technologies Overview Table of Contents 1. Technological Approach... 2 1.1. Full Web Stack... 2 1.2. Full Desktop Stack... 2 1.3. Web Stack with Device Helpers... 2 1.4. Shared Web and Desktop

More information

ADVANTA group.cz Strana 1 ze 24

ADVANTA group.cz Strana 1 ze 24 ADVANTA 2.0 System documentation How to configure the system Advanta Part 1. Quick Start Initial Set- up Document Version 1.2. (System version 2.2.2.h) Advanta allows companies using project management

More information

Luxor CRM 2.0. Getting Started Guide

Luxor CRM 2.0. Getting Started Guide Luxor CRM 2.0 Getting Started Guide This Guide is Copyright 2009 Luxor Corporation. All Rights Reserved. Luxor CRM 2.0 is a registered trademark of the Luxor Corporation. Microsoft Outlook and Microsoft

More information

Instructions NPA project mini websites

Instructions NPA project mini websites Instructions NPA project mini websites Version 1.0 This document provides guidance for using the project mini websites on the NPA programme website. The Content Management System (CMS) for the mini website

More information

CASCOM. Context-Aware Business Application Service Co-ordination ordination in Mobile Computing Environments

CASCOM. Context-Aware Business Application Service Co-ordination ordination in Mobile Computing Environments CASCOM Context-Aware Business Application Service Co-ordination ordination in Mobile Computing Environments Specific Targeted Research Project SIXTH FRAMEWORK PROGRAMME PRIORITY [FP6-2003 2003-IST-2] INFORMATION

More information

COLLABORATIVE EUROPEAN DIGITAL ARCHIVE INFRASTRUCTURE

COLLABORATIVE EUROPEAN DIGITAL ARCHIVE INFRASTRUCTURE COLLABORATIVE EUROPEAN DIGITAL ARCHIVE INFRASTRUCTURE Project Acronym: CENDARI Project Grant No.: 284432 Theme: FP7-INFRASTRUCTURES-2011-1 Project Start Date: 01 February 2012 Project End Date: 31 January

More information

HGC SUPERHUB HOSTED EXCHANGE

HGC SUPERHUB HOSTED EXCHANGE HGC SUPERHUB HOSTED EXCHANGE EMAIL OUTLOOK WEB APP (OWA) 2010 USER GUIDE V2013.6 HGC Superhub Hosted Email OWA User Guide @ 2014 HGC. All right reserved. Table of Contents 1. Get Started... 4 1.1 Log into

More information

Outline. 1 Introduction. 2 Semantic Assistants: NLP Web Services. 3 NLP for the Masses: Desktop Plug-Ins. 4 Conclusions. Why?

Outline. 1 Introduction. 2 Semantic Assistants: NLP Web Services. 3 NLP for the Masses: Desktop Plug-Ins. 4 Conclusions. Why? Natural Language Processing for the Masses: The Semantic Assistants Project Outline 1 : Desktop Plug-Ins Semantic Software Lab Department of Computer Science and Concordia University Montréal, Canada 2

More information

The Web Service Sample

The Web Service Sample The Web Service Sample Catapulse Pacitic Bank The Rational Unified Process is a roadmap for engineering a piece of software. It is flexible and scalable enough to be applied to projects of varying sizes.

More information

CEDMS User Guide

CEDMS User Guide CEDMS 5.3.1 User Guide Section Page # Section 1 User Interface 2 CEDMS DM Toolbar 2 Navigation Pane 3 Document List View Pane 3 Add-on Pane 3 Section 2 Saving and Importing Documents 4 Profile Form 4 Saving

More information

Conceptual Data Structures (CDS) Tools

Conceptual Data Structures (CDS) Tools Integrated Project Priority 2.4.7 Semantic based knowledge systems Conceptual Data Structures (CDS) Tools Deliverable D1.2 Version 1.0 15.01.2008 Dissemination level: PP Nature P Due date 15.01.2008 Lead

More information

Tania Tudorache Stanford University. - Ontolog forum invited talk04. October 2007

Tania Tudorache Stanford University. - Ontolog forum invited talk04. October 2007 Collaborative Ontology Development in Protégé Tania Tudorache Stanford University - Ontolog forum invited talk04. October 2007 Outline Introduction and Background Tools for collaborative knowledge development

More information

SEMANTIC WEB POWERED PORTAL INFRASTRUCTURE

SEMANTIC WEB POWERED PORTAL INFRASTRUCTURE SEMANTIC WEB POWERED PORTAL INFRASTRUCTURE YING DING 1 Digital Enterprise Research Institute Leopold-Franzens Universität Innsbruck Austria DIETER FENSEL Digital Enterprise Research Institute National

More information

EVACUATE PROJECT WEBSITE

EVACUATE PROJECT WEBSITE FP7-313161 A holistic, scenario-independent, situation-awareness and guidance system for sustaining the Active Evacuation Route for large crowds EVACUATE PROJECT WEBSITE Deliverable Identifier: D.12.1

More information

Usability Tests and Heuristic Reviews Planning and Estimation Worksheets

Usability Tests and Heuristic Reviews Planning and Estimation Worksheets For STC DC Usability SIG Planning and Estimation Worksheets Scott McDaniel Senior Interaction Designer 26 February, 2003 Eval_Design_Tool_Handout.doc Cognetics Corporation E-mail: info@cognetics.com! Web:

More information

ProMenPol Database Description

ProMenPol Database Description Project No.: 44406 Project Acronym: ProMenPol Project Title: Promoting and Protecting Mental Health Supporting Policy through Integration of Research, Current Approaches and Practices Instrument: Co-ordination

More information