<MMI/> Marine Metadata Interoperability
Sensor Metadata Interoperability Workshop Report
October 19-20, 2006, Portland, Maine, USA


Authors: John Graybeal, Stephanie Watson, Anthony Isenor, Luis Bermudez, Matthew Howard, and the Workshop Team Leads: Anne Ball, Mike Botts, Steve Havens, Melanie Meaux, Greg Reed

Publication Date: July 3, 2007

Executive Summary

On October 19 and 20, 2006, the Marine Metadata Interoperability (MMI) Initiative held a Sensor Metadata Interoperability (SMI) workshop in Portland, Maine. The workshop's goal was to establish consensus on a content standard, or some other interoperable approach, for representing sensor metadata. To this point, oceanographers have used many metadata content standards and specifications to describe sensors, with little interoperability or coordination between those specifications. This workshop was designed to improve that situation.

The workshop engaged 52 participants from 42 organizations, with expertise in many technical and scientific areas. Its participants came from different backgrounds, but with similar sensor metadata interoperability issues. Most participants were technical, although some scientists with responsibility for sensor metadata attended. Almost all participants were senior professionals in their respective fields, reflecting the target audience and subject of the workshop.

The workshop included two tracks. The first, Evaluating Sensor Metadata Content Standards, sought to evaluate five specifications in real-world sensor metadata situations. The second, Sensor Metadata Training, provided participants with broad technical training on sensor metadata practices and options. At the beginning of the first day, participants attended presentations and received direction that guided the workshop's activities. The evaluation track stayed in breakout groups until the final plenary discussions, reviewing and working with their assigned metadata specification. Meanwhile, the training track met as a group, attending additional presentations, working with training materials, and finally discussing needs and recommendations for sensor interoperability. In the final session, the two groups rejoined to present their outcomes, and then discussed recommendations.

The workshop produced many more (and more varied) recommendations than expected, and almost all the recommendations were widely embraced. The workshop explicitly generated many detailed recommendations and action items, and initiated definition of still more items in the final report. Participants emphasized the critical need for computable, use-oriented standards, in addition to the (currently dominant) discovery-oriented standards. In some cases organizations or individuals volunteered to lead a recommended activity, but many activities remained unallocated, typically due to lack of available resources. The workshop participants agreed that many different environmental science communities can benefit from the work that was outlined. As a result, the workshop recommended identifying funds from multiple sources to pursue this effort.

This workshop is one step toward collaboratively ensuring the interoperability of sensor metadata. We also endorse the results of the Alliance for Coastal Technologies workshop on Enabling Sensor Interoperability, which immediately preceded and complemented our own. Future steps consist of community implementation of the action items listed herein. All workshop materials and outcomes have been posted on the MMI website.

Workshop Frequently Asked Questions

We summarize here the workshop goals and results. The following questions highlight our goals before the workshop, while the answers summarize the workshop reports and guide the reader to additional information.

Did the workshop reach consensus on a content standard or approach to use for describing sensors in ocean observing systems?
No, it did not; the current world of both system requirements and existing specifications proved too diverse. Some useful distinctions were made, and are described in the appendix Brief Comparison of Specifications.

Did the workshop provide a matrix of strengths and weaknesses of the various content standards?
Some of this information was collected in the Team Reports appendix. However, the processes and membership of each team were not similar enough to allow direct comparison, and the evaluation teams did not meet in joint session. MMI has agreed to adopt this task as an ongoing project.

Does the report list common good/best practices that were identified?
Yes, many of these are listed in the Workshop Guidance section of the Conclusions. This list is by no means comprehensive, but will be enhanced by future guidance providers like MMI.

Did the participants gain insight into sensor metadata practices and options? Did participants leave the workshop with a reasonable understanding of the current state of development of sensor metadata practices that could be applied to ocean observing systems?
Yes; survey results indicate most participants gained insight, though better results were clearly possible. Many participants had a very good understanding of these practices beforehand, and the workshop provided considerable context, both explicitly and implicitly. (See also the Workshop Survey Results and Workshop Lessons Learned appendices.)

Did participants leave with the ability to create some form of sensor metadata?
Yes, to a significant degree. This was not achieved to the extent the workshop intended, though many of the training participants were very pleased with what they learned.

Did the workshop identify potential problems when submitting sensor information to specific clearinghouses?
Yes, a number of problems were identified, mainly that most clearinghouses do not accept detailed or use-oriented metadata about sensors. As a result, we dropped this demonstration from the workshop, and did not discuss clearinghouse-related topics.

Has the workshop provided an additional level of guidance/lessons learned to supplement metadata development or clearinghouse submission processes?
Yes, the guidance in the Conclusions section will provide considerable assistance to metadata developers, and enhancement of this guidance is likely. At least one clearinghouse directly received significant input on its metadata processes.

Table of Contents

Executive Summary
Workshop Frequently Asked Questions
1. Introduction
   1.1 Goals and Background
   1.2 Preparation
2. Presentations
3. Brief Description of Specifications
4. Workshop Conclusions
   4.1 Workshop Guidance
       Reusing and Blending
       Types of Metadata
       Extensions and Profiles
   4.2 Workshop Recommendations
       Content Standards and Specifications
       Vocabularies
       Additional Information on Vocabularies
   4.3 Enabling Future Progress
       Priorities and Justifications
       Possible Outcomes
Appendices
   A. Agenda
   B. Preparation
      B1. Track One
      B2. Track Two
   C. Materials & Tools
   D. Team Reports
      D1. International Standards Organization (ISO) 19115
      D2. Content Standard for Digital Geospatial Metadata
      D3. DIF/Auxiliary Description of Instruments
      D4. SensorML
      D5. TransducerML
   E. Workshop Survey Results
      Survey Analysis
      Survey Numerical Rankings
      Survey Comments
   F. Workshop Lessons Learned
   G. Participant List

1. Introduction

The Marine Metadata Interoperability (MMI) Initiative is a collaborative effort, funded by the National Science Foundation (NSF) and other interested contributors, to promote the exchange, integration, and use of marine data. The organization began its formal existence in October 2004, and has received significant additional funding since then, including a 3-year grant from NSF that took effect July 1. MMI held its first workshop in August 2005 on Advancing Domain Vocabularies, and has developed several tools and a growing web site of metadata resources.

The MMI project held its second workshop in Portland, Maine on October 19-20, 2006. The Sensor Metadata Interoperability workshop focused on developing best practices for sensor metadata. It provided two participant tracks in order to connect with both intermediate and advanced technical users. (Because the workshop promotions emphasized technical content, it did not offer any basic training materials.) Intermediate users received detailed training in an XML-based content standard, and advanced users evaluated multiple content standards in targeted groups. This report describes the purposes, organization, conduct, and outcomes of the MMI Sensor Metadata Interoperability workshop.

1.1 Goals and Background

The ability of scientists to find and use data is increasingly dependent on the availability of well-described, searchable references to the data and its provenance. In ocean observing systems, the availability of meaningful sensor metadata indicating instrument models, configurations, and calibrations, among many other parameters, is a key criterion for scientific usefulness of the data. These descriptions must be sufficiently interoperable that they persistently accompany representations of the data, particularly as it is indexed for search and access in national and international data clearinghouses.

The overall goal of the Sensor Metadata Interoperability (SMI) workshop was to increase the interoperability of sensor descriptions, as an important component of the data stream description. All systems in the chain of oceanographic data must support some level of sensor information, whether it is just the sensor type (e.g., CTD or wind sensor) or a fully characterized sensor data processing chain. The workshop was intended to influence sensor metadata choices made by ocean observing system developers, as well as the choices made by the developers of standards and clearinghouses that must interoperate for effective, systematic data systems development.

Five factors influenced the focus of this MMI workshop on sensor metadata: 1) the development of two major observatory networks, which rely upon interoperable sensor metadata; 2) the existence of two active ocean observing infrastructures that are using advanced sensor descriptions (Monterey Bay Aquarium Research Institute's Monterey Ocean Observing System, and Satlantic's commercial WETSAT product); 3) the near-standard status of several sensor metadata specifications (SensorML, TransducerML, and IEEE 1451); 4) the fortuitous development of a sensor interoperability workshop by the Alliance for Coastal Technologies (ACT) in the same time frame; and 5) the Global Change Master Directory's development of a new specification for sensor metadata.

The first three factors suggested the timeliness of this workshop: before the major observatories begin major development, but with some exemplars in hand from which lessons can be learned. The immediately preceding ACT workshop on Enabling Sensor Interoperability addressed complementary issues, and included some of the experts who were necessary for the MMI workshop. The ACT workshop addressed the need for integrative protocols at the hardware, firmware, and systems levels in order to attain instrument interoperability among and between ocean observing systems. It took place just before the MMI workshop and at the same location, allowing strong cross-fertilization of expertise and ideas. Further, the Global Change Master Directory (GCMD) group was developing another descriptive specification for capturing sensor metadata, to be used in conjunction with the existing DIF standard to capture metadata for user-entered GCMD records. The MMI workshop provided a unique forum for presenting the new specification and receiving feedback from experts.

The workshop plenary presentations (described below) were intended to capture this background, providing a broad context for the specific issue of sensor metadata interoperability.

1.2 Preparation

Detailed discussion of the preparation is deferred to an appendix of this report, as most of the details are not critical to understanding the conclusions. We note here that, to a significant degree, we were hosting two parallel and sometimes intersecting workshops, and expected to need correspondingly increased preparation time. Hosting two major tracks let us target the evaluation track to sophisticated participants with sensor metadata experience, and offer a valuable training workshop experience to others who were not already familiar with sensor metadata. This arrangement also allowed the two groups to interact, both during breaks and in the plenary sessions. This proved extremely valuable for the workshop results and the individual participants, although some cross-fertilization goals were not fully realized.

The workshop included two complementary tracks, each with its own goals. The goals of Track One, Evaluating Sensor Metadata Standards, included:

- establishing consensus on a sensor metadata content standard,
- identifying good practices for content standards of this type,
- characterizing strengths and weaknesses of specific content standards,
- increasing understanding of the details of each content standard,
- addressing issues raised by Track 2 participants, and
- communicating key conclusions to Track 2 participants.

The goals of Track Two, Sensor Metadata Training, included:

- maximizing participants' insight into sensor metadata value, practices, and options,
- understanding the issues involved in defining and agreeing on content standards for observatories,
- identifying problems when submitting sensor information to specific clearinghouses,
- providing questions and input to the content standard discussions in Track 1, and
- evaluating guidance and training materials used in this track.

[Photo, left to right: Greg Reed, Melanie Meaux, Anne Ball, Bob Arko, Mike Botts, Steve Havens; front row: Matthew Howard, John Graybeal, Luis Bermudez]

2. Presentations

Anne Ball of the U.S. National Oceanic and Atmospheric Administration (NOAA) described the challenges and status of the Metadata Expert Team of the U.S. Integrated Ocean Observing System (IOOS). IOOS is a system of systems that routinely and continuously provides quality-controlled data and information on current and future states of the oceans and Great Lakes. Data Management and Communications (DMAC) is one of the three key subsystems of IOOS. Since quality metadata is an essential ingredient of the success of DMAC, it has appointed a Metadata Expert Team, which, among other goals, provides metadata recommendations to DMAC, and coordinates and liaises with metadata activities to ensure that DMAC needs are met. MMI is helping to fulfill some of these needs, which overlap a great deal with those of the international ocean observing community.

Alan Chave (Woods Hole Oceanographic Institution) described the cyberinfrastructure of the Ocean Research Interactive Observatory Networks' Ocean Observing Initiative (OOI) and highlighted the importance of metadata. As a co-author of the Conceptual Architecture documents for the OOI Cyberinfrastructure, he emphasized the numerous and changing relationships of the resources in the system, including the sensors and platforms making the measurements. The OOI Cyberinfrastructure must enable the consistent and accurate collection of metadata describing those resources.

In Track 2 (the training track), panel presentations were made on active ocean observing infrastructures that incorporate sensor metadata interoperability: Steve Adams presented Satlantic's approach, Benoit Pirenne the NEPTUNE Canada project, and John Graybeal the Monterey Ocean Observing System and Shore Side Data System. Luis Bermudez also presented Ocean Observing System Technologies (OOSTech) efforts and how they relate to sensor metadata. (The presentation by John Cree on RésEau, which could not be given due to time constraints, is highlighted in the sidebar below.) After the panel, two specific markup languages for sensors were presented: Alexandre Robin (University of Alabama at Huntsville) provided an introduction to SensorML, and Steve Havens (Iris Corporation) introduced TransducerML. These last two presentations provided the basic technical foundation for the training component of the workshop.

Sidebar: RésEau

The RésEau (Building Canadian Water Connections) project was developed by Environment Canada, a department of Canada's federal government, to demonstrate the sharing, discovery, access, and use of water information over the Internet. The project accomplishes this through the use of standards and specifications endorsed by the Canadian Geospatial Data Infrastructure and the Open Geospatial Consortium. The project integrates water information from a distributed network of partners across Canada, and provides a central portal to access all the data (ca/reseau/).

A leading premise of the project was the use of standards for metadata and services. The project requires Federal Geographic Data Committee (FGDC) Content Standard for Digital Geospatial Metadata (CSDGM) conformant metadata to describe data sets, and the National Biological Information Infrastructure biological extension profile of FGDC CSDGM for biological collection and product-level metadata. Search and discovery services are provided via the GeoConnections Discovery Portal using Z39.50 and (under test) an OGC stateless catalog interface.
End-user applications use OGC's Web Map Service (WMS) and Web Feature Service (WFS), as well as Sensor Observation Service (SOS) for delivery of observations and measurements. Finally, SensorML is the nominal standard for defining observing systems and variables. It was chosen to meet requirements to describe instruments and monitoring sites, report observations and measurements, integrate real-time and non-real-time information, and disseminate information while maintaining interoperability. With SensorML, RésEau describes instrument identification and classification, related information and links, a history of events at the stations, and the variables measured and outputs. Extensible Stylesheet Language Transformations (XSLT) are used to produce a unified output of the SensorML information (a toy sketch follows at the end of this sidebar).

SensorML and other standards were commonly adopted by all partners due to a requirement established as part of the partnership fund. This ensures that all partners produce data and metadata following prescribed standards/profiles. All contributors adhere to the same standards and principles in their data and information storage, cataloguing, and access methods. In the RésEau system, data and information rest at the source, so the information is stored and managed only once, but is available in a highly distributed context. The Application Framework and Station Discovery Framework diagrams show the components of RésEau, and the RésEau web site describes the goals, approaches, and results of the project.
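To illustrate the XSLT step, here is a minimal sketch only: the element names below are invented stand-ins, not the real SensorML schema, and the lxml library is assumed to be available.

    # Transform a SensorML-like document into a plain-text summary, in the
    # spirit of RésEau's unified XSLT outputs. Element names are illustrative.
    from lxml import etree

    sensor_doc = etree.XML(
        b"<Sensor><identifier>station-042</identifier>"
        b"<variable>water_level</variable></Sensor>")

    stylesheet = etree.XML(b"""
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <xsl:template match="/Sensor">Station <xsl:value-of
        select="identifier"/> measures <xsl:value-of
        select="variable"/></xsl:template>
    </xsl:stylesheet>""")

    # Apply the stylesheet to produce the unified, human-readable view.
    print(str(etree.XSLT(stylesheet)(sensor_doc)))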

3. Brief Description of Specifications

This description was developed by the workshop organizers and approved by the workshop participants in a post-workshop survey. Five standards, or proposed standards, were considered for this workshop. Basic information on these specifications is presented in Table 1; a Type entry marked "awaiting approval" indicates the document is not yet approved by a standards body.

Table 1: The Content Specifications

  CSDGM (Content Standard for Digital Geospatial Metadata)
      Type: Standard. Creator/Approver: FGDC (Federal Geographic Data Committee). The standard is also known as FGDC-STD.
  Remote Sensing Extension
      Type: Extension. Creator/Approver: FGDC Standards Working Group.
  ISO 19115:2003
      Type: Standard. Creator/Approver: ISO (International Organization for Standardization).
  Marine Community Profile
      Type: Profile. Creator/Approver: AODCJF (Australian Ocean Data Centre Joint Facility).
  DIF (Directory Interchange Format)
      Type: Standard. Creator/Approver: NASA Goddard GCMD (Global Change Master Directory).
  AD-I (Auxiliary Description of Instruments)
      Type: Extension. Creator/Approver: NASA Goddard GCMD.
  SensorML
      Type: Standard (awaiting approval). Creator/Approver: University of Alabama-Huntsville / OGC (Open Geospatial Consortium).
  TransducerML
      Type: Standard (awaiting approval). Creator/Approver: IRIS Corporation / OGC.

Of these specifications, DIF, ISO 19115, and FGDC CSDGM are more oriented toward broadly describing content, particularly data sets, while SensorML and TransducerML are more targeted to computable descriptions of sensors and of the data processing. Of the three discovery-oriented specifications, the CSDGM may be the most widely used, as it was mandated by the United States government for data systems developed using federal funding. ISO 19115 is now widely used in Europe, and efforts are under way to align CSDGM version 3 with the ISO standard. DIF is the simplest and oldest core standard, with relatively few fields, but is widely used to exchange metadata between data systems.

The CSDGM Remote Sensing Extension is the oldest and most mature extension, and is widely used to describe data from satellites. The extension allows many detailed and computable specifications for remote sensors. It does not gracefully support the description of in-situ sensors, and no known extension does this. (The process for developing approved CSDGM extensions tends to be laborious and lengthy, but of course extensions can be used even if they are not approved.) As an extension to CSDGM, the Remote Sensing Extension inherits some of that standard's controlled vocabulary and non-computability issues. (In the core CSDGM, sensor metadata can be stored as part of data lineage or data quality, both of which mostly rely on free-text descriptions. While free text may be sufficient for human interpretation, it hinders machine interoperability.)

The AD-I extension for DIF was recently designed by the Global Change Master Directory team to support entry of instrument and platform metadata into GCMD. This workshop was its first exposure outside of the GCMD development environment, but it has been undergoing development and testing for some time. AD-I is not intended to describe individual sensors, but is used to characterize the class of sensor used to produce a data set. As this specification is still evolving, issues like vocabulary terms are still being addressed.

The Marine Community Profile for ISO 19115 was recently introduced to the IODE Steering Committee for review, and is available to interested parties. It is designed to improve upon some of the elements of ISO 19115 that were incomplete for marine applications. The Marine Community Profile was not targeted at sensor data per se, and so was not directly applied in the workshop. The ISO 19115 standard was used to express some sensor metadata, but since this standard was not targeted at sensors either, augmentation with a more focused specification was considered an attractive alternative. Note that a proposed North American Profile for ISO 19115 has recently been released for comment, but at the time of the workshop it was unavailable.

SensorML and TransducerML were both developed expressly to describe sensors in very detailed ways, and both are now being considered as potential standards by the Open Geospatial Consortium (OGC) within the Sensor Web Enablement (SWE) set of specifications. SensorML was reorganized over the last 18 months to reflect a process-oriented perspective, and is nearing the latter stages of approval. TransducerML was developed by a private corporation. In neither case are the artifacts openly distributed (at least partly due to constraints of the OGC standardization process), although members of the OGC may access the most recent versions. In both cases very sophisticated applications have been built that leverage these specifications. Both specifications are relatively young, and initial users of both have been constrained by the lack of documentation and user-friendly interfaces. Many questions arose at the workshop about the possible harmonization of these two standards, but proprietary (and perhaps other) considerations have so far precluded that result.

4. Workshop Conclusions

These conclusions were generated from the workshop results, which included raw lists of Action Items and Recommendations (posted on the MMI web site), and the results from each breakout team (included in the Team Reports appendix). The conclusions were reorganized and refined by the workshop organizers and participants, and ratified by the workshop participants in a post-workshop survey. Individual comments from the survey were also incorporated wherever possible.

The first section, Workshop Guidance, provides guidance for metadata creators and for metadata system developers that came from workshop comments and conclusions. The next section identifies specific action items, the Workshop Recommendations, that the workshop participants believe are important to pursue. In many cases volunteers were identified during the workshop, and other volunteers have been identified since then and added to this list. Finally, the Enabling Future Progress section describes the perspective of the participants on the value and organization of funding for these efforts.

Recommendations that conform with those in the Alliance for Coastal Technologies workshop report (Thompson, 2007) are prefixed by an asterisk (*). The recommendations in the two reports proved entirely complementary. (The two workshops did not consider each other's recommendations as they met.)

4.1 Workshop Guidance

These conclusions were drafted by the workshop organizers, and ratified by the workshop participants in a post-workshop survey. The narrative fills out the specific recommendations made by workshop participants in the various sessions. Note that the Marine Metadata Interoperability web site contains additional guidance [1], and more guidance will be added based on these recommendations and extensions to them.

Reusing and Blending

Participants stated repeatedly that adding new content standards should be avoided wherever possible, and that reusing existing standards, either as-is or through extension, was vastly preferable. Many of the concrete recommendations were offered to encourage reuse by making it simpler and more compelling. For many, retarding the proliferation of standards was a very high priority.

All other things being equal, the most widely used standard that meets the requirements was considered the preferred one to use, because it increases interoperability of and access to the information. The level of use could be evaluated across all projects, or just across projects in the domain of interest, since most sharing will occur within the domain of interest. Many believed that which standard was chosen was not particularly critical, because crosswalks and search tools will enable interoperable discovery and repurposing of metadata into other standards. For participants with more detailed metadata, though, the requirements were more specific, and the selection was more important. (See Types of Metadata below.) Participants also noted known weaknesses of crosswalks and search engines, including inconsistent results and information loss.

One technique for reusing existing standards involves blending standards within a single description. (For example, the ISO 19115 content standard could be extended by pointing to, or incorporating, a SensorML description of a sensor.) Although this already happens to a limited degree in some contexts, the major standards do not have embedded mechanisms to support this. The workshop agreed such a hybrid approach may work better than settling on or extending a single standard, since it leverages all the work spent on the more specialized standard. Although hybrid content standards were not extensively explored during the workshop itself, the ISO breakout group did an exercise connecting its standard to SensorML, and responded favorably to the possibilities. The workshop made a concrete recommendation to further pursue hybridization options.
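As a rough sketch of the blending idea (the element names, namespace, and URLs here are invented for illustration and do not follow the actual ISO 19115 or SensorML XML schemas), a discovery-level record might point to an external sensor description rather than redefining its fields:

    # Build a toy discovery record that references an external SensorML-style
    # document by link, instead of duplicating the sensor's detailed metadata.
    import xml.etree.ElementTree as ET

    NS_META = "http://example.org/discovery-metadata"   # hypothetical namespace
    NS_XLINK = "http://www.w3.org/1999/xlink"

    record = ET.Element(f"{{{NS_META}}}MetadataRecord")
    ET.SubElement(record, f"{{{NS_META}}}title").text = "CTD time series, mooring M1"

    # The specialized sensor description lives elsewhere; the record links to it.
    sensor = ET.SubElement(record, f"{{{NS_META}}}sensorDescription")
    sensor.set(f"{{{NS_XLINK}}}href",
               "http://example.org/sensors/ctd-sn1234.sensorml.xml")

    print(ET.tostring(record, encoding="unicode"))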
One approach to limit content standard proliferation, and increase reuse of existing standards, is to reference all of the standards in one place, and to provide comparisons between them. The Marine Metadata Interoperability section on content standards [2] already lists known sensor-related content standards, and the Workshop Recommendations call for providing comparative information. A second strategy to increase interoperability is to identify commonality between the standards, and attempt to create a single model representing the aggregate information they contain. Such a unified model for sensors could enable several strategic gains, encouraging greater interoperability (or even merging) among standards, and making comparisons easier. This strategy has also been adopted as a recommendation, and Matthew Arrott of the LOOKING project has agreed to lead such an effort.

Types of Metadata

There are many ways to classify metadata, as discussed in an MMI article [3] and elsewhere. Naturally, some of these arose during workshop discussions. While most of the metadata classification schemes are at best rough characterizations, we describe the two schemes that were relevant in this workshop.

Discovery or Use. Three sets of comparable classification terms were used during key workshop discussions:

- Discovery vs. Use
- Descriptive (or Search) vs. Deployment and Configuration
- Informational (or Human) vs. Computable

Each of these characterizations splits metadata into two groups, the first of which provides more general or broad information about a data item or data collection, and the second of which provides detailed information that computers need to process the data. The terms in the first pair, Discovery and Use, are often used despite their imprecision.

[1] MMI Guidance:
[2] MMI list of content standards:
[3] "How are metadata classified?", on the MMI web site. Other classifications include Syntactic vs. Semantic and Real-Time vs. Delayed.

Discovery metadata is information that can be used to find data of interest, while Use metadata is the information that a computer needs in order to work with data, for example to access it and present it to the user. (So Use metadata must include both syntactic metadata, like whether the data items are ASCII or binary values, and semantic metadata, like the units of measurement for the data items.) Information that describes the Deployment of the system, and its Configuration, makes up typical components of Use metadata.

But as the MMI metadata classification article describes, terms like Discovery and Use are somewhat ambiguous, and they aren't sufficient to characterize a key aspect of metadata: whether it is computable. In this context, computability means the information can be acted upon by programs in predictable ways. Programs can then be written that use the metadata to perform data processing more intelligently and automatically. For example, it would be difficult to write software that makes decisions based on the contents of a free-text description field, because everyone will fill out the field differently. Similarly, even though you can write software to analyze the information in a free-text keyword field, you can't be sure you have identified all the synonyms that an individual may have used to label their data. To be effectively computable, the contents of a field should be narrowly circumscribed and deterministic, so similar data will be consistently labeled with similar metadata; and the field itself must be defined in a way the computer understands (for example, an XML schema). The concept of the Semantic Web [4] builds on information that is computable in these ways.

While most of the participants considered computable metadata an important priority, clearly the dominant communities provide metadata primarily for discovery. Indeed, three of the dominant content standards are not designed for computability, as described in the Brief Description of Specifications above. Much of the workshop discussion about combining content standards addressed this key point.
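As a toy illustration of the distinction (all field names and the allowed-units list below are invented for the example), computable metadata is metadata a program can validate and act on deterministically:

    # Free-text metadata: readable by a person, opaque to software.
    free_text = {"description": "Seabird CTD, temp + salinity, calibrated last spring"}

    # Computable metadata: each field is constrained to a known vocabulary,
    # so software can filter, convert, and validate without guessing.
    ALLOWED_UNITS = {"degree_Celsius", "psu", "decibar"}   # toy controlled vocabulary

    computable = {
        "variable": "sea_water_temperature",   # e.g., a CF standard name
        "units": "degree_Celsius",
        "value_type": "float",
    }

    def validate(record: dict) -> None:
        """Reject records whose units are not in the controlled list."""
        if record.get("units") not in ALLOWED_UNITS:
            raise ValueError(f"unrecognized units: {record.get('units')!r}")

    validate(computable)   # passes; a free-text variant like "deg C" would not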
Static or Dynamic. Another characterization of metadata is whether it is Static or Dynamic. Metadata that describe immutable qualities of the sensor are described as static (e.g., its model number and serial number). Metadata that could change, even while the sensor is running (e.g., sampling speed, accuracy, and data output format), are described as dynamic. These terms can also be confused. Items like sensor configuration parameters typically vary only occasionally, but on some sensors they can be changed often, even while the instrument is operating. Depending on the particular installation, such metadata could be considered static or dynamic.

The biggest concern about some of the standards was the difficulty individuals would have actually creating metadata, particularly to comprehensively describe a sensor. The workshop participants felt the most desirable approach had three components. First, manufacturers would have the responsibility to enter the immutable static metadata. Just as users currently expect a PDF file describing how a device or application works, system developers would expect to receive a descriptive metadata file, following a common content standard, with every product. Then, the instrument must go through a number of life cycle stages (calibration, registration, deployment, operation) which dynamically create metadata. If the user community has to manually document all the necessary metadata, users will quickly give up. In the second component, therefore, automated systems must automatically augment the metadata for an instrument at each stage of the instrument's life cycle, whenever it is possible to do so. Finally, individual users should provide the metadata that can only be created manually, but should be aided by user-friendly tools that minimize workload to the maximum extent possible.

How Much, When, and Where

How much metadata? The standard tradeoff when developing metadata systems and standards is the amount of metadata that is required, versus the resources needed to provide that metadata. The more metadata that is provided, and the more precise that metadata, the more useful it is; but at what cost? This leads to tension between requiring a comprehensive set of metadata, which is very powerful but requires a lot of work to implement, and requiring a minimal set, which is not so useful but at least everyone will provide it.

Some of this tension can be mitigated. For example, moving more of the responsibility for providing metadata to manufacturers and automated observing systems increases the metadata per unit of labor, and decreases the labor required of each system operator or scientist. Many of the Workshop Recommendations reflect this principle. (Along the same lines, moving more of the responsibility for providing working examples and software to the developers of content standards and major cyberinfrastructures will decrease the labor of each component system developer or observing system platform.)

Another way to understand how much metadata a system requires is by considering the interoperability that system wants to provide. Arguably, this approach is much better, as it places the requirements of the constructed system first, with the level of effort reflecting the requirements. The MBARI-hosted NSF SENSORS Workshop in 2004 issued a report [5] that defined levels of interoperability that an observing system could achieve. For more advanced interoperability, a higher level of metadata sophistication will be required. (Guidelines could be developed based on this structure, and eventually certifications could be offered by an organization that evaluates interoperability.) Unfortunately, this approach also leads to a minimum level of overall interoperability, because universal interoperability will be limited to the interoperability provided by the least capable system.

[4] Berners-Lee, Tim; Fischetti, Mark (1999). Weaving the Web. HarperSanFrancisco, chapter 12.
[5] Edgington, Duane R., and Davis, Daniel, eds., 2004, SENSORS: Ocean Observing System Instrument Network Infrastructure, NSF SENSORS Project Workshop Report, MBARI, Moss Landing, CA.

Evolution of metadata. Just as the data in a system has a life cycle, so does the collection of metadata. We may consider data collection as a series of activities that result in the creation of a data asset. These activities likely have associated metadata, which then combine to form the complete metadata description for the resulting data asset. With each activity, more metadata is generated about the data asset produced by that sensor. Typical activities include:

- sensor preparation and configuration
- sensor calibration
- sensor deployment on a platform
- platform deployment in the field (or on another platform)
- sensor samples and reports data
- sensor configuration changes (power on/off, sampling rates, output formats, algorithms)
- data is stored
- data is analyzed and quality controlled, and comments made upon it
- data is processed into new data (with its own life cycle)
- data is referenced (e.g., in a publication)

Although only five of these activities directly involve the sensor, all of them involve the data asset resulting from the sensor, and the association of additional metadata with the original static metadata. With this perspective, it is clear that metadata for a particular asset evolves over time, and that multiple processes must be designed to ensure all the metadata is available to the end user. Questions of what to capture, how to capture it, and how to convey it to the right place must be addressed in designing the system.

As suggested by the list above, the context of a sensor's deployment can be important to making use of its output. In fact, the complete provenance of the data coming from the sensor is valuable for many analyses. In the physical realm, the provenance includes context like the platform deployment characteristics, while in the digital world, it includes knowledge about each processing step that occurred along the way to the final product.
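A minimal sketch of this accumulation (the field names and history structure are invented for illustration): each life-cycle activity appends metadata without discarding what came before.

    # Start from the static metadata a manufacturer might supply.
    metadata = {"model": "SBE 37", "serial": "1234"}   # hypothetical instrument

    def record_activity(meta: dict, activity: str, **details) -> dict:
        """Return a copy of the metadata with one more life-cycle event appended."""
        history = meta.get("history", []) + [{"activity": activity, **details}]
        return {**meta, "history": history}

    metadata = record_activity(metadata, "calibration", coefficients=[1.02, -0.003])
    metadata = record_activity(metadata, "deployment", platform="mooring M1", depth_m=10)
    metadata = record_activity(metadata, "configuration", sample_rate_hz=1.0)
    # The evolving record now carries the provenance needed to interpret the data.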
Where to put it. In the beginning of its life, metadata may be attached to an instrument or accompany it in a separate file, or may be provided via a web site. In any case, it should always be possible for a person or computer to look at a sensor or other data source and find the correct, current metadata that describes it. One task that metadata-rich systems must perform is relating data to its corresponding metadata. In a simple system this could be accomplished by including metadata with the data. When many data records are produced, and the metadata record is much bigger than the data record, the system must be able to relate the data record to the metadata that describes it. The approach generally favored at this workshop was to put both static and dynamic metadata on the Internet, so that system components and others can readily access it.

How to reference it. Another piece of metadata to consider is the information that uniquely identifies the metadata description. In order to uniquely identify an item like a metadata description file, a system for creating a unique identifier or ID (that is, a string or number that is not duplicated elsewhere in that context) is necessary. Unique IDs will be necessary to identify not just metadata description files, but items such as sensors, software, data streams, and data sets.

Properly versioned metadata description identifiers enable a data system to achieve two goals: establishing that two descriptions are essentially the same, even if they have minor differences; and recognizing that a description is fundamentally different. A good data system design uses the unique metadata identifiers to link each data record with the most appropriate descriptive metadata.

To take a concrete example, consider the metadata description that applies to a data stream from a sensor on a mooring. This metadata description can be identified using a unique identifier. If the metadata has a minor error, for example a misspelling in some descriptive text, a new version of the metadata description should be created, and that new version should now be used as the description of record. Something must differentiate the two versions: either the unique identifier or the version information for the new metadata record must be different from that of the previous metadata record. However, the system can track the relationship between the two, and so can tell which one was in effect when each data record was generated, and which one should be used as the best description (typically the new one) for each data record.

Now consider the case where the sensor is modified to produce a different data output format. In this case, the new metadata description will only apply to the new data output. Here, the changed metadata record will again have a new unique identifier, and in this case the data system matches the previous data outputs only with the previous description, and the new data outputs only with the new metadata description.

The metadata record unique identifier, including any version information, should be generated by the system or process responsible for creating and versioning the metadata descriptions. Note that in the data model for some systems the unique identifier will be required to change for any change to the metadata, effectively including the versioning within the unique identifier mechanism, while other systems will keep the unique identifier unchanged so long as it describes the same concept.
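The two cases above can be sketched in a few lines (the identifier scheme, class, and method names are all invented for illustration):

    import uuid
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class MetadataDescription:
        concept_id: str     # stable while the description covers the same concept
        version: int
        body: dict = field(default_factory=dict)

        def revise(self, **changes) -> "MetadataDescription":
            """Minor correction (e.g., fixing a misspelling): same concept_id,
            new version; the new record becomes the description of record."""
            return MetadataDescription(self.concept_id, self.version + 1,
                                       {**self.body, **changes})

        def fork(self, **changes) -> "MetadataDescription":
            """Fundamental change (e.g., new output format): new concept_id,
            so earlier data records keep matching only the old description."""
            return MetadataDescription(str(uuid.uuid4()), 1,
                                       {**self.body, **changes})

    original = MetadataDescription(str(uuid.uuid4()), 1,
                                   {"output": "ASCII", "note": "misspeled"})
    corrected = original.revise(note="misspelled")   # same concept, version 2
    new_format = original.fork(output="binary")      # different concept entirely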

Extensions and Profiles

Metadata standards are typically developed to allow changes. Modifications are often necessary to meet particular requirements of the organization creating metadata descriptions. For a trivial example, members of an organization may be required to put the organization's name in the Sensor Owner field. This requirement means the Owner field is a mandatory part of the organization's metadata structure. To discuss the workshop's recommendations on potential modifications to a standard, we first describe the terms extension and profile, which label the modifications that can take place.

Extensions are additions to the standard that allow users to provide information in additional fields that were not mentioned in the original standard. In standards such as ISO 19115, extensions include:

1. addition of a new metadata section
2. alteration of the domain of a metadata element (for example, assigning a code list to specify what responses are allowed for that metadata element)
3. addition of terms in a code list
4. addition of a new metadata element to an existing metadata element
5. addition of a new metadata entity
6. changing the obligation of a metadata element from optional to mandatory (but not the reverse, which would break the core standard)

Constraints are considered a specialized subset of extensions, in which additional restrictions are placed on the standard. (In the above list, items 2 and 6 are constraints.) In this case the term extension describes the addition of information to the standard, even though the metadata instances that follow the standard are restricted.

Profiles are the community-specific application of a standard. In a sense, profile = metadata content standard + extensions. Profiles must meet the core requirements of the metadata content standard (that is, provide the mandatory elements that the standard requires) but can include extensions (described above). Since we also know a metadata content standard is composed of the core metadata set plus optional elements, a profile can also be thought of as profile = core metadata set + optional elements + extensions. The developers of most content standards expect and encourage the development of extensions and profiles, and may direct how they are to be specified and/or registered.

A community that adopts a profile increases the interoperability of its metadata internally. It even increases its interoperability with communities that use other profiles, because the use of the core metadata elements is shared.

The workshop emphasized the value of developing extensions based on existing sensor standards. Through extensions, any of the following approaches could be used to add sensor metadata to an existing standard:

- complete sensor data can be embedded in the metadata;
- data pertinent to a specific instance of a sensor (location, etc.) can be embedded, while general information (sensor type, model, make, etc.) is provided via a pointer to an external source; or
- more comprehensive sensor data can be provided using a pointer to an external source.

For example, the Biological Data Profile is a profile of the Federal Geographic Data Committee (FGDC) Content Standard for Digital Geospatial Metadata (CSDGM, the term by which we refer to this standard). The Marine Community Profile (used in this workshop) is a proposed profile of the ISO 19115 standard; AD-I is a proposed extension to the DIF standard.
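A toy sketch of the "profile = core standard + extensions" idea (the field names, obligations, and conformance check below are invented for illustration):

    # Core standard: field name -> obligation.
    CORE = {"title": "mandatory", "owner": "optional", "keywords": "optional"}

    # A community profile constrains one field and extends with another.
    CONSTRAINTS = {"owner": "mandatory"}              # optional -> mandatory
    EXTENSIONS = {"calibration_status": "optional"}   # new element, with a code list
    CALIBRATION_CODELIST = {"Uncalibrated", "Calibrated by Manufacturer",
                            "Calibrated Against Standard",
                            "Calibrated Against Field Data"}

    profile = {**CORE, **CONSTRAINTS, **EXTENSIONS}

    def conforms(record: dict) -> bool:
        """A record conforms if every field the profile marks mandatory is
        present, and any calibration_status value comes from the code list."""
        if not all(f in record for f, rule in profile.items() if rule == "mandatory"):
            return False
        status = record.get("calibration_status")
        return status is None or status in CALIBRATION_CODELIST

    print(conforms({"title": "CTD cast 42", "owner": "MBARI"}))   # True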
An important way that content standards may be constrained is through the use of vocabularies. Vocabularies can be used to fill out particular fields within the standard. The vocabulary used may be specified within the standard itself (for example, some fields in ISO 19115 define the possible entries); the standard may describe how to specify the vocabulary or vocabularies used (netCDF COARDS/CF allows users to specify the standard vocabulary); or the standard may be silent about vocabularies (the CSDGM is fairly open about how many fields are filled out). As noted above, extensions are a common way to narrow the options for filling out fields requiring textual responses.

The workshop participants strongly recommended providing mechanisms for specifying the vocabularies used in filling out the fields of the specification. The best approach allows multiple vocabularies within a metadata instance, using an appropriate notation. As a less desirable option, enabling the instance to specify a single particular vocabulary was still considered better than not providing for the specification of a vocabulary at all.

Making Life Easier

Many recommendations to content standard and specification developers addressed making their specifications more useful, more usable, and more widely used. Almost all the recommendations were for improvements that would increase the usability of the specification for both developer and end user.

One recommendation from many participants was for the development of a graphical user interface (GUI) to enter, view, and edit the information provided via a specification. Among other requirements, the GUI should be intuitive, and provide documentation about the content standard and its fields. Long after the CSDGM was first developed, a wide range of GUIs finally exists that mostly addresses these requirements. Newly proposed specifications that do not have GUIs thereby lose acceptance and increase frustration in the community.

Another recommendation to increase acceptance of specifications was to provide the equivalent of help desk support. This support may be provided voluntarily by the community using the specification, but it is critical to getting initial users (the early adopters) over the initial problems associated with learning the specification.

Complexity of the specifications played a clear role in their adoption, or lack thereof. While the workshop participants represent a community that is comfortable dealing with complexity, all appreciated the degree to which unvarnished complications made a specification much less appealing and much less useful. To the extent a specification tries to thoroughly address complicated goals, the community using it should expect correspondingly complex issues to arise (e.g., best practices in using XLink, or soft-typing vs. hard-typing).

Finally, developers and users were encouraged to follow the guidance and best practices associated with the specification (such as the guidance provided in this report). Developers were put on notice that more information along these lines needs to be created, and more examples need to be deployed.

4.2 Workshop Recommendations

These recommendations were developed from the recommendations and priority action items captured at the workshop, the in-workshop survey, the lessons from each breakout group, and the post-workshop survey. They have been reorganized for clarity, and lessons about specific specifications were moved to later sections or omitted. In many cases an action item was identified and responsible individuals or organizations were named; these results are shown in square brackets as [By: Institution1-Institution2/Lead Volunteer, Other Volunteers-Institution3...]. In some cases an action item was known to be in progress; this is noted as {Active}, with a footnote to the referenced activity. See the smiactivities page on the MMI web site for the latest update on the activities described in this section.

Content Standards and Specifications

A star (*) indicates a similar item is in the Alliance for Coastal Technologies sensor workshop report.

C1 *Create a feature matrix of specifications. [By: MMI/Graybeal. Some resources may also be devoted to this by NOAA/IOOS.]

C1.1 *Document key characteristics of content specifications and best practices for their development (e.g., clear, unambiguous definition of all fields; make existing references available). [By: To be determined.]

C1.2 *Point to relevant references on each specification: Wikipedia entries, associated vocabularies, "Dummies" guides, conceptual documents like UML views of schemas, white papers, tutorials, examples of descriptions in specific domains (oceanography, ...), example use cases and application scenarios, and users' experiences with the specifications (e.g., comment boxes/bulletin boards, which need promotion; FAQs; additional documentation). [By: To be determined.]

C1.3 *Document best practices for filling out content specifications. Describe the minimal acceptable fields to fill out in each content specification. (This must balance the desire for low cost of entry against the desire for greater functional interoperability.) [By: To be determined.]

C2 *Determine if there is a clear direction to proceed with manufacturers to offer their specifications in a standard way. Use the common data model and sensor description registry as appropriate. [By: MMI/John Graybeal and ACT/Scott McLean]

C3 *Create a sensor description registry for storing descriptions of sensor models (from manufacturers or system developers). Consider (i) whether this should be the same thing as the global repository for descriptions of sensor instances, in which each individual resource could be registered, and (ii) whether this should also register template descriptions (examples) for others to adapt and use. Requirements include:
- must allow external resources to point to any given sensor description
- must enable feedback to the author of the description instance or the specification
- must persist sensor model descriptions for a long time
(A toy registry sketch follows the C items below.) [By: TAMU-SCOOP/Gerry Creager]

C3.1 *Define requirements and best practices for sensor registries. Reference or adopt relevant ISO registry work. [By: To be determined.]

C4 *Create working examples of sensor model descriptions and put them in the sensor description registry. [By: LDEO-TAMU-PDC-WHOI-ACT]
C5 Create a repository for permanent registration of sensor instances and their metadata. Requirements include:
- must allow external resources to point to any given sensor instance description
- must persist sensor instance descriptions for a long time
[By: To be determined. Jens Klump/GFZ Potsdam is implementing an instance.]

C6 Create validating templates for each specification and put them in a sensor description template registry. [By: To be determined.]

C7 *A common data model should be developed in the Unified Modeling Language (UML) to represent the data aspects of IEEE 1451, TransducerML, and SensorML. The goals of the data model are to come to a concrete understanding of what each specification incorporates in its model, and to represent those contents uniformly in UML. [By: Matt Arrott to chair; specification developers to provide any UML artifacts of their specification; many volunteers interested in participating.]

C8 Determine the potential for interoperation between multiple specifications, for example referring to SensorML or TransducerML metadata from within ISO 19115, CSDGM, or DIF instances, or vice versa. (This would make it possible not to redefine all referenced fields within the referencing specifications.) Things to consider:
- is the referenced data embedded or external?
- is the referenced information understandable in the context of the referencing file/content specification, or is it just a "blob"?
- what happens if a link goes away?
- are there security issues associated with linking to another location and bringing data in (or with preventing the application from following the link)?
[By: To be determined.]

C8.1 Identify crosswalks between existing content specifications. {Active [6]} [By: IOOS DMAC Metadata Expert Team/Julie Bosch and members; MMI/Graybeal. In progress.]

C9 *Identify or create content specifications that allow the definition of policy for a given instrument (who can use it, under what circumstances, how they can use it, how products are used or distributed). [By: To be determined. See also the Survey Comments on these Conclusions, which provide more information.]

C10 Evaluate the availability of specifications for describing computational models, and for enabling their interoperability with real-time data. (Note that SensorML and TransducerML both enable some level of process description.) [By: To be determined.]

C11 *Identify/recommend a system for creating globally unique identifiers to label each of the following resources: sensors, applications, metadata descriptions, data streams, and data sets. Different unique identifier systems may be appropriate for each resource. [By: To be determined. Under discussion in multiple technical forums.]

[6] See Content Standards Field Comparison Matrix: Introduction at csmatrix
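A minimal sketch of the registry idea behind C3 and C5 (the class, URL pattern, and persistence rule are invented for illustration):

    class SensorDescriptionRegistry:
        """Stores sensor model/instance descriptions under stable identifiers,
        so external resources can point at them indefinitely (C3/C5)."""

        def __init__(self):
            self._descriptions: dict[str, str] = {}

        def register(self, identifier: str, description: str) -> str:
            # Persistence requirement: a registered description is never
            # overwritten; corrections go in under a new identifier/version.
            if identifier in self._descriptions:
                raise ValueError(f"{identifier!r} is already registered")
            self._descriptions[identifier] = description
            return f"http://example.org/registry/{identifier}"   # citable reference

        def lookup(self, identifier: str) -> str:
            return self._descriptions[identifier]

    registry = SensorDescriptionRegistry()
    url = registry.register("sbe37-model", "<SensorML document for the model>")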

Vocabularies

V1 Continue MMI's existing community service representing vocabulary terms with URIs. (Terms for GCMD, CF, UCUM, AGU index terms, and OGC parameters are currently maintained.) [By: MMI/Luis Bermudez]

V2 Create a hosted, moderated vocabulary registry that follows recommended best practices. Reference or adopt relevant ISO registry work. [By: To be determined.]

V2.1 *Characterize best practices for creating and maintaining vocabularies (e.g., moderation practices exist and are publicly defined, terms are clearly defined, word ordering is consistent). [By: To be determined.]

V2.2 Create a comparison checklist of existing vocabularies and document their characteristics (where to go for expert support, additional documentation, etc.). [By: MMI]

V2.3 Provide guidance as to the best vocabularies for particular users/applications. [By: MMI]

V3 Create a community schema for representing vocabularies and their terms using URIs, or similar. The schema should be entered into IANA, and terms made accessible via a resolver service. (A toy sketch of the term-URI idea follows this list.) [By: MMI/Bermudez]

V3.1 Consider how Wikipedia or other systems might be referenced as an authority within the community vocabulary schema. [By: To be determined.]

V3.2 Encode the most important vocabularies within the community vocabulary schema. [By: To be determined.]

V4 Create a formal specification of requirements for a vocabulary resolver service. [By: Bob Arko, Luis Bermudez, Alexandre Robin, Steve Havens.]

V5 Identify vocabularies that are needed to characterize the sensor domain, and any existing instances of such vocabularies. (An initial list of existing and needed vocabularies is in the What Vocabularies Are Needed section below.) [By: MMI/Graybeal, to publish initial material for community review and improvement.]

V5.1 Characterize vocabularies as specific to the sensor domain, or more general. [By: To be determined.]

V6 Consider how to create a community process to agree on vocabularies. The process must be endorsed and participated in by the communities in that domain. [By: To be determined.]

V6.1 Assess how the characterization of a vocabulary (whether it is specific to sensors or more general) affects how its terms are incorporated. [By: To be determined.]

V6.2 Initiate community processes to create needed vocabularies. [By: To be determined.]
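To make V1, V3, and V4 concrete, here is a rough sketch of the term-per-URI idea with a toy resolver (the URI pattern and registry contents are invented, not MMI's actual scheme):

    VOCAB_BASE = "http://example.org/vocab"   # hypothetical base URI

    def term_uri(vocabulary: str, term: str) -> str:
        """Mint a URI for a vocabulary term so any metadata instance can cite
        the term unambiguously, even when multiple vocabularies are mixed."""
        return f"{VOCAB_BASE}/{vocabulary}/{term}"

    # A resolver service maps a term URI back to its definition and provenance.
    REGISTRY = {
        term_uri("cf", "sea_water_temperature"): {
            "definition": "In-situ temperature of the sea water.",
            "vocabulary": "CF Standard Names",
        },
    }

    def resolve(uri: str) -> dict:
        return REGISTRY.get(uri, {})

    print(resolve(term_uri("cf", "sea_water_temperature")))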
Additional Information on Vocabularies
The information in this section was developed by synthesizing results from the workshop, and was approved by the workshop participants in a post-workshop survey. This workshop's goals included identifying existing controlled vocabularies used with these specifications, and identifying the need for additional controlled vocabularies. The workshop discovered that almost no controlled vocabularies were specified or used in the content specifications the workshop reviewed. Even the specifications that were explicitly designed for computability had not fully completed the process of specifying vocabularies, or more accurately, of specifying acceptable and specific techniques to reference vocabularies and their terms. (See Comparison of Specifications for more information on differences between the specifications.) The workshop recommendations identify the steps needed to create and access vocabularies using standard protocols. As those steps are accomplished, content specifications can be updated to reflect best practices, and will hopefully keep pace.
6 See Content Standards Field Comparison Matrix: Introduction at csmatrix

What Vocabularies Are Needed
Controlled vocabularies covering a wide range of topics are needed, to complete many different fields in a metadata description. Below, needed vocabularies have been organized into groups with similar characteristics. Note that MMI has an extensive list of vocabularies[7] that fit some of these needs, but so far there are relatively few community-wide, comprehensive vocabularies that follow well-established management procedures.
7 MMI list of vocabularies:
Fundamental Concepts. The workshop identified the following fundamental concepts that should be addressed by vocabularies:
- variables measured (i.e., phenomena)
- units
- value types (e.g., boolean, float, int, double; many vocabularies have this information)
These concepts occur in most data systems, and some vocabularies already exist in each category; a few are cited below.
Sensor Instance Identification. The following categories of sensor information require vocabularies that list the known instances; often this is a service provided by a registry or catalog. These capabilities may require some agreement with the manufacturers, or a universal service that is updated by many users (for example, the way Apple's iTunes deals with song titles).
- manufacturer name (probably needs to be associated with a date, given how names change over time)
- sensor model name (specific to each manufacturer and highly changeable over time, even within a single manufacturer and for a given sensor)
Types and Roles. These slightly broader concepts are also needed:
- instrument/sensor types (note that potential categorization schemes are highly sensitive to user interests; some of the schemes are listed below)
- functional capability
- intended user
- capability for attaching devices
- type of postprocessing required
- data source type (e.g., actuator, detector, software)
- measurement process type (e.g., sampled, dredged, radiative reflectance, weighed, imaged; the CF standard vocabulary may be informative here)
- platform types (e.g., AUV, mooring, float; see vocabularies in previous sections)
- sensor roles (e.g., primary, backup, inactive)
- deployment type (e.g., mission, campaign, expedition)
- unique identifier type (e.g., locally unique ID, globally unique ID, MAC ID, UUID)
Content Specification Fields. The following terms may be needed to define the tags used in the content specification, especially in extensions or profiles of the specification. (As such, they reflect potential guidance to specification developers.) Of course, if a field like Calibration Status is added, terms to fill out that field will also be needed (Uncalibrated, Calibrated by Manufacturer, Calibrated Against Standard, Calibrated Against Field Data).
- sensor configuration terms (for calibration, number of samples, scan angle)
- terms for computed values (e.g., standard deviation, DOP, mean, median, percentage)
- governance terms (to characterize sensor interfaces and sensor policies)
Vocabularies Already In Use
The following controlled vocabularies were used in the workshop's sensor content specifications:
- Global Change Master Directory (GCMD) Science Keywords: discovery metadata for the ISO theme keyword.
- International Hydrographic Bureau (IHB) area names (note these are not useful for localized area names).
- Unified Code for Units of Measurement (UCUM): units of measure by reference.
Potential Vocabularies to Use
The following controlled vocabularies were identified as candidates that could be used with these specifications:
- GCMD Instrument Keywords (this list may not be complete, and may be too general)
- GCMD Platform Keywords (this list is not yet available as a stand-alone vocabulary)
- MMI Platform Ontology (this work is not yet complete)
- British Oceanographic Data Centre (BODC) parameter vocabularies
- GCMD Location Keywords (note these are not useful for localized area names)
- Climate Forecast (CF) standard names for units
- European Petroleum Survey Group (EPSG) Geodetic Parameters
4.3 Enabling Future Work
The workshop agreed on the following recommendation in its final session:
F1 *The workshop strongly endorses advancing interoperability among metadata frameworks, and recommends establishing funding from multiple sources to pursue this effort.
This recommendation sprang from the recognition of: the use of many existing standards, and proposed standards, within the community; the need to have interoperable systems built using these specifications; the many activities that are needed to pursue interoperability; and the importance of conducting those activities with broad community involvement and perspective. Some of the activities described in these conclusions have been started by organizations interested in sensor metadata interoperability, but many other activities will require additional resources. The workshop members did not discuss strategies for pursuing the additional funding.

Many of the recommendations above fit into an overall strategic approach, and could be funded as stand-alone projects or as part of a broad set of activities. A critical characteristic for overall success, however, is that each activity be pursued as part of a coherent community-based effort, rather than as individual activities targeted to specific projects. The pursuit of these recommendations as isolated, one-off activities would divide community efforts and slow community progress.
The workshop also did not address candidates to receive additional funding and lead the projects. Although for obvious reasons the workshop participants were most familiar with the MMI project, participants viewed many other community-focused organizations as potential collaborators and leaders. At the organizational level, suggestions included the Alliance for Coastal Technologies, the Open Geospatial Consortium, and various standards bodies. Ocean observing projects include both ORION's Ocean Observatories Initiative and IOOS' Data Management and Communication efforts in the United States, and the SeaDataNet and ESONet projects in Europe. Technical projects with an active metadata interest include MMI, the Global Change Master Directory, and the Geosciences Network (GEON); environmental cyberinfrastructure projects like LOOKING, LEAD, and NEON are also active. Companies already engaged in these discussions include Satlantic and Iriscorp. We note the Alliance for Coastal Technologies also includes many relevant funding recommendations in its Sensor Interoperability Workshop Report. We expect that funding allocations would take place according to the usual processes, and hope that funding organizations would take into account the priorities, needs, and strategies in this report when making funding available.
Priorities and Justifications
The workshop participants also agreed the following recommendation was necessary:
F2 Develop priorities and justifications for funding.
Of course, it is likely that any organization pursuing funding would have considered priorities and provided justifications for that funding. But in addition, there is value in a community-developed project to organize and prioritize the efforts, since this could help direct additional funds to the most important activities. This workshop report has been organized as a first effort toward these ends. The Workshop Recommendations have been prioritized to reflect the highest emphasis in the final list of action items, with related items combined as appropriate. Justifications are provided throughout the report, and in brief form after each recommendation.
Possible Outcomes
The workshop organizers anticipate these recommendations can be pursued in three ways:
1. An organization or project sees a funding opportunity or strategy in this report and responds to it individually, perhaps citing some material from this report as justification.
2. An organization or project responds to a funding opportunity by representing themselves as meeting a community need, citing the material and approaches in this report for justification, and identifying the ways they will be responsive to community objectives.
3. Organizations, including those cited in this report, form collaborative, community-based alliances to respond to funding opportunities, emphasizing their interest in reflecting the needs of the entire community.
While participants at this workshop encourage all three responses, we especially commend the stronger community connections established by approaches (2) and (3). The greater the community involvement in the development of sensor metadata approaches and solutions, the more sophisticated those solutions will be, and the less likely the introduction of competing and non-interoperable approaches.

Appendices
A. Agenda
B. Preparation
C. Materials & Tools
D. Team Reports
E. Workshop Survey Results
F. Workshop Lessons Learned
G. Participant List
Note: Many of these appendices use information presented to workshop participants. In most cases, we have attempted to preserve the terms used in the original presentation, even though we have since tried to become more disciplined. Thus, the terms content standard, standard, and FGDC might often be more appropriately expressed as content specification, specification, and CSDGM for consistency with the rest of the document. Materials produced since the workshop have been edited to reflect the practices of the report. Additional original material from the workshop may be found at the workshop web site.
Appendix A. Agenda
Day 1: Thursday, October 19, 2006
PLENARY SESSION
Welcome/Logistics/Agenda
Focus of Workshop: Assessing Standards and Identifying Issues
Keynote Presentation: The IOOS DMAC Metadata Expert Team: Challenges and Status (Anne Ball, Co-Chair, IOOS DMAC Metadata Expert Team)
Keynote Presentation: Environmental Observatories: ORION Cyberinfrastructure Sensors & Metadata (Alan Chave, Woods Hole Oceanographic Institution)
0910: Track 1: Evaluating Sensor Metadata Standards
Detailed review of task: Describe two sensors for the provided use cases. Use the assigned content standard as faithfully as possible. Attempt to complete the work to a usable standard (e.g., the description can be validated).
Schedule: Assignment due at beginning of Day 2.
Primary Goals: See how well the standard works. Identify strengths and weaknesses of the standard. Prepare a presentation of the results.
Secondary Goals: Identify, use, and similarly analyze applicable vocabularies; identify other issues.
BREAK
Go into breakout teams. Each group tries to complete all use cases. Vocabularies used as needed to fill out descriptions (note missing but needed vocabularies).
1200: LUNCH
0910: Track 2: Sensor Metadata Interoperability Training
9:10 AM Session 1: About Metadata: Context and Solutions
Track II Panel: End-to-end Systems Focusing on Sensors (OOSTech, MOOS (SIAM/SSDS), Satlantic, NEPTUNE)
Satlantic: Steve Adams, Satlantic
NEPTUNE: Benoit Pirenne, NEPTUNE Canada
OOSTech: Luis Bermudez, MBARI/MMI
MOOS: John Graybeal, MBARI
BREAK
10:30 AM Session 2: Introduction to Hands-On Exercise
Hands-On Overview (Matt Howard, TAMU) [10 min]
Review of Workshop Tools (Matt Howard, TAMU) [30 min]
Introduction to Two Markup Languages: SensorML (Alexandre Robin, UAH) [60 min]; TransducerML (Steve Havens, Iriscorp)
1:00 PM Track 1: Continue Working in Breakout Groups
1:00 PM Hands-On Training I and II
Training Step I: Viewing and Editing XML Documents (Matt Howard, TAMU) [30 min]
Training Step II: Testing XML Documents (Luis Bermudez, MBARI/MMI) [120 min]

4:00 PM: Track 1 Plenary
Problem Identification
Use Case Questions
Suggestions for Group
4:30 PM: Back to Work in Breakout Groups
3:30 PM BREAK
3:45 PM Hands-On Training III and IV
Training Step III: Transforming XML Documents
DIF to FGDC stylesheet, GCMD sensor information submission, SensorML to GCMD (Melanie Meaux, NASA/GCMD) [40 min]
SensorML to FGDC, other FGDC stylesheets (David Sallis, NOAA NCDDC) [40 min]
Stylesheets in the ResEau Project (John Cree) [15 min]
Training Step IV: Other Techniques and Tools (Alexandre Robin, UAH) [25 min]
Day 2: Friday, October 20, 2006
BREAKFAST
0830: Track 1: Continue in Breakout Groups: Prepare Initial Report
Summary of Results
Successes and Problems
Answers to Questions from Track 2
Track 2: Plenary: Feedback on Standards and State of the Practice; Lessons Learned
11:00 AM BREAK
Plenary Discussion: How To Move Forward
To Agree on Standards (What agreement is possible?)
To Develop Missing Pieces (What pieces are clearly missing?)
To Get the Most Bang for the Buck (How can we combine forces?)
12:00 PM: LUNCH
DAY 2 (FRIDAY) AFTERNOON
1:00 PM: Plenary Discussion: Final Guidance/Recommendations
Areas of agreement
Inputs to Steering/Report Generation Teams
Identification of Collaborative Opportunities: working teams on priority problems; existing forums, mail lists, activities; standards development efforts; demonstration projects; technical workshops
2:30 PM: WORKSHOP ENDS (formal meeting)
3:00 PM: Post-Workshop Analysis
To be attended by Workshop Steering Committee and other interested members, and to include:
Summarize and consolidate recommendations
Create initial reports on each standard
Augment and initiate collaborative opportunities
Identify other opportunities for progress (strategic planning)
Action item list for workshop closure

Appendix B. Preparation
B1. Track One Purpose and Design
Twenty-five workshop participants were involved in Track One. These participants were divided into groups, with each group concentrating on the application of a specific metadata specification. Each group had an identified leader who had past familiarity with the specific metadata specification being investigated by the group. The remaining members of the group were selected on a quasi-random basis. Although some participants had past experience with the specific specification used by the group, this was not a requirement for group membership.
This type of group composition initially relies heavily on the leader, who possesses the knowledge to start the process of placing the metadata content into the specification. Once started, however, the group dynamic is established by the inquisitiveness of the other members, who begin eliciting information from the leader in an attempt to understand both the specification and how the group will proceed with metadata content placement within the specification. In this way, the group explores both the specification as defined in a descriptive record context, and the specification as defined by its supporting documentation.
Identification of Specifications and Vocabularies
The particular specifications to be utilized by the workshop participants were identified in advance of the workshop. Workshop organizers considered numerous specifications presently being used in the marine community. To be considered, a specification needed to be in use and mature enough for a group of non-experts to utilize; essentially, this imposed a requirement for supporting documentation before a specification would be considered.
The specification selection and use case examples were prepared in parallel. Of course, particular use cases would be more or less suited to a particular specification. For geospatial marine metadata describing data assets, the ISO and CSDGM standards are obvious choices. However, workshop organizers wanted to emphasize descriptions that focused on instrumentation, and the applications to which such descriptions would be applied. Although the ISO and CSDGM standards are perhaps more oriented towards geospatial data set descriptions (for data discovery and interpretation), these standards were included in the workshop. Their inclusion was due in part to the wide use of these standards in the marine metadata community, but also in part to an experiment to exercise these standards in a metadata description scenario that is less traditional for them. Since organizers knew the use cases would deal with instrumentation, the SensorML and TransducerML specifications were also selected. These sensor-centric specifications have direct applicability to instrumentation metadata descriptions. Finally, the DIF metadata standard, together with the NASA Goddard Auxiliary Description of Instruments (AD-I) specification, was selected. The AD-I is currently under development as an extension to the DIF standard used in the Global Change Master Directory (GCMD) clearinghouse, and that relationship helped prompt this selection. The workshop provided a mechanism for the AD-I development team to gain feedback from the application of the specification by typical users.
Use of specific vocabularies was not mandatory.
Of course, some specifications have content vocabularies defined within the specification. However, the organizers did not specify the use of particular external vocabularies; they wanted to learn what vocabularies were already in use, and what vocabularies were needed to make the specification effective.
Identification of Instrument Classes and Instances
The workshop organizers wanted to concentrate on instrumentation metadata descriptions. The use case development first identified classes of instruments. Traditional oceanographic instrumentation such as the CTD (conductivity-temperature-depth, or conductivity-temperature-density) sensor was included. The CTD is a direct measurement device, where the sensor is in direct contact with the medium being measured, and in its default configuration produces fairly simple readings. CTDs are additionally interesting in that they do not always measure depth (sometimes it is set as a constant), and can be outfitted with additional sensors.
The second instrument, the Acoustic Doppler Current Profiler (ADCP), is also in the medium being sensed, but the instrument senses beyond the proximity of the device to a distance of many meters. It measures ocean currents, or more precisely the motion of particles in the ocean. The ADCP also produces quite complex measurements, in that it measures motion vectors at multiple altitudes in the water column.
For a remote sensing example, the CODAR system was selected. CODAR, which stands for Coastal Ocean Dynamics Applications Radar, is a High Frequency (HF) radar system that remotely measures ocean surface currents. By emitting electromagnetic waves from shore and monitoring their reflections using another shore antenna, surface wave speed and direction can be measured (two systems are required to fully measure both parameters).
Finally, it was recognized that supporting information from other instruments plays a critical role in oceanographic data collection, and the GPS was selected as a typical example. The GPS is distinct in that it relies on additional platforms, the satellites providing the GPS signal, in order to perform its function, while the unit delivering the information is essentially in situ on a surface platform.
Having selected the classes of instrumentation, the organizers then specified instrument instances as follows:
CTD: Seabird Microcat 37
ADCP: RDI Sentinel Workhorse Long Ranger
High Frequency (HF) Radar: CODAR Ocean Sensors HF Radar
GPS: Garmin GPS17-HVS

Information specific to the instrumentation, in fact describing an individual instance of three of these instruments (CTD, ADCP, and GPS), was provided to participants prior to the workshop.
Development of Use Cases
Having identified four specific instruments, the organizers then concentrated on the development of general use cases for these instruments. The use cases outlined real-world situations where the instrumentation could be used. The use cases were as follows:
1. Search via Catalog: Metadata from a raw data stream will be fed directly into a data catalogue and clearinghouse (e.g., the Geospatial One Stop). The data is reported directly from a sensor on a fixed platform. Groups need to describe any sensor-related information needed to allow potential users to search for the data by variable name or sensor type. The metadata description for the sensor should assume the clearinghouse is fully capable of representing whatever information is deemed necessary for the search.
2. Find Data, Use Data (Automatically): Data from a sensor is made available via the Internet by displaying an icon on a map. Describe the sensor-related information needed to allow a visitor to find and use the original data used to create the icon. Include as a goal automatically accessing the data based on the metadata associated with the icon.
3. Deployment Tracking, Maintenance (Usage History), Data Processing, Sensor Control: A sensor is deployed as part of a deep ocean observatory. The sensor may be mobile and deployed at multiple depths during a mission. Physical access to the sensor is very limited. Describe the sensor and its context to meet the following goals: Many people must be able to remotely access the sensor, for example to configure it. Data from the sensor is automatically delivered at frequent intervals, and must be received, presented, and processed by a host system on shore. Users must be able to use the sensor deployment history to maintain it in an operational state, including troubleshooting and repair.
4. Deep Historical Analysis: A sensor's data set must be analyzed a year after it has been collected. The analysis must address the accuracy of the sensor's readings, including the quality control of the data and the quality assurance procedures applied to the sensor. Describe metadata to be captured to allow these analyses to be performed.
The four instruments and four use cases could theoretically result in sixteen metadata descriptions, but the expectation was that a single description of an instrument could meet the needs of all four use cases above, and that descriptions of the latter two instruments might be illustrative rather than complete.
B2. Track Two Purpose and Design
As noted above, the primary goal of Track Two was to maximize participants' insight into sensor metadata practices and options. Track Two was an opportunity for participants to learn more about sensor metadata, so that they could become more active practitioners and contribute further to the field of sensor metadata for ocean observatories. The general approach of Track Two involved introductory presentations, followed by hands-on training, and ending with plenary discussions (many involving Track One participants). The introductory presentations provided context and focused on SensorML, TransducerML, sensor metadata in end-to-end data solutions for observatories, and the tools for the hands-on training. The training was divided into four sessions.
Trainers roamed the classroom to provide individual help to participants. The first two sessions employed the free XML editor Oxygen, as well as example XML files describing a CTD and an anemometer. These sessions focused on viewing and editing XML documents, and on testing (or validating) XML documents, respectively. The first two sessions gave participants the opportunity to familiarize themselves with Oxygen, XML, and how XML could be used to package sensor metadata. The third session addressed transforming XML documents. Three contributors, Melanie Meaux (NASA GCMD), David Sallis (NOAA), and John Cree (Environment Canada), prepared presentations demonstrating how an XML document could be transformed (to display differently, or to prepare a file for submission to a clearinghouse) using XML stylesheets. The fourth session included a demonstration of other techniques to manipulate XML documents and tools to use with XML documents. Throughout the training, questions and comments were collected from participants.
Development of Example Files
To ensure that participants had relevant files they could easily use during the first two sessions of the training, MMI prepared XML documents, according to the SensorML core schema, describing a CTD sensor and a wind sensor. (Both sensors are typically found on an oceanographic buoy.) The CTD file was complete, but the wind file was intentionally left incomplete, so that workshop participants would have to correct it for validation. To correct the wind file, workshop participants were given a text description of the Davis Anemometer. Participants were then asked to complete the file by including the anemometer information in the appropriate manner, before validating it (a sketch of such a file appears below).
Tool Development
In the second session of the hands-on training, participants could use the validate feature of Oxygen, or they could use an online metadata validator tool developed by MMI. The validator tool was developed to allow simple copy and paste into a text box, and online validation of the text against the SensorML core schema.
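To give a concrete sense of the exercise, a deliberately incomplete description in the style of the workshop's wind file might look like the sketch below. This is an illustrative reconstruction, not the actual workshop file: the structure is simplified from SensorML and SWE Common, the definition URI is invented, and the identification gap is the kind of omission participants had to repair.

  <sml:System xmlns:sml="http://www.opengis.net/sensorML/1.0">
    <!-- Identification intentionally missing: participants added the
         anemometer's manufacturer, model, and serial number here -->
    <sml:outputs>
      <sml:OutputList>
        <sml:output name="windSpeed">
          <swe:Quantity xmlns:swe="http://www.opengis.net/swe/1.0"
              definition="http://example.org/phenomena/windSpeed">
            <swe:uom code="m/s"/>
          </swe:Quantity>
        </sml:output>
      </sml:OutputList>
    </sml:outputs>
  </sml:System>

Participants supplied the missing content and then ran validation, mirroring the Training Step II workflow.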

Although the tool was initially established to validate text against the SensorML core schema, it can be easily adapted for validation against other XML schemas.
Interaction Between the Tracks
Although the workshop was designed to include two tracks, these tracks were crafted to interact. The challenges and insights from Track One groups were presented simultaneously to the Track Two participants, which enabled Track Two participants to appreciate current technologies in this field. Concurrently, Track Two participants informed the Track One discussion, both by submitting questions, and by offering comments toward the evaluations that occurred in the plenary discussion. Track Two participants also reported on the lessons learned from the exercises they performed, which helped inform the plenary discussions.
Other Preparation
While not exactly preparation for the workshop, two arrivals strongly influenced the preparation process. The recent arrivals of Luis Bermudez's son, Santiago, and Stephanie Watson's son, Harper, were very much in the minds of some participants.
Appendix C. Materials and Tools
Track 1: Materials
Problem Definition: Instruments and Use Cases
Participants were given materials describing the problem they were to address. These materials are available at the workshop site, and are described in the previous appendix. Participants also received a list of potentially helpful terms for describing sensors.
Team Template
Each Track One team was directed to pages on the MMI site with the following outline:
References
Use case descriptions
Questions about the content standard
Observations about the process
Observations about the content standard (includes recommendations)
Survey questions
Examples
The intention was that all of the results from the workshop would be filled out on the web pages, but time did not allow for sufficient training of the recorders.
Track 2: Tools
To facilitate the training exercises, MMI implemented a simple form-driven metadata validation tool on its web site. Users could paste their XML into the form and click a button to see if the XML was valid according to the latest SensorML specification. A standard Xalan parser was used to perform the validation against the SensorML schema. While this utility is not suitable for serious development use, it can be a helpful technique to enable users without a validating XML parser (Oxygen was the default tool used in the workshop) to participate without a heavyweight commitment of resources. The form can be adapted to validate against other metadata schemas should that prove desirable. The validator is still maintained on-line; for details, see the MMI web site.
Track 2: Example Files
Two example metadata descriptions were available before and during the workshop. One file provided an example for a wind sensor, while another described a CTD (conductivity-temperature-depth) sensor. Both metadata descriptions were captured in SensorML. The wind sensor description contained only namespace specifications, while the CTD description included namespaces, a description of the instrument, unique identification of the instrument, a specification of how to interface with the instrument, and the parameters measured by the instrument. The example CTD and wind files used in Track Two can be found at: mmiworkshop06/materials/track2materials/steps1and
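Training Step III relied on XSLT stylesheets to transform one metadata encoding into another (for example, DIF to FGDC, or SensorML to GCMD). The stylesheet below is a hedged sketch of the general technique, not one of the workshop stylesheets; sourceRecord, sensorName, and targetRecord are invented element names standing in for real source and target schemas:

  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <!-- Map the source document's sensor name into the target schema's
         title field, discarding everything else -->
    <xsl:template match="/">
      <targetRecord>
        <title><xsl:value-of select="/sourceRecord/sensorName"/></title>
      </targetRecord>
    </xsl:template>
  </xsl:stylesheet>

A real crosswalk stylesheet is mostly a long series of such mappings, which is one reason the crosswalk work identified in recommendation C8.1 is a prerequisite for writing them.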

Appendix D. Team Reports
D1. International Standards Organization (ISO) 19115/19139
This group had two members with very strong knowledge of the standard, and two additional members with a slight knowledge of the standard.
Background
The ISO 19115 standard specifies the fields used to describe geospatial metadata. ISO 19139 defines a Geographic MetaData XML (gmd) encoding, an XML Schema implementation derived from (i.e., designed to encode) ISO 19115. Both ISO 19115 and ISO 19139 have been released as ISO standards (the latter after the workshop took place). Using ISO 19115, sensor data can be stored as part of data lineage or data quality, both of which mostly rely on free text descriptions. Sometimes sensor information can be provided as a theme keyword, especially when data is often referred to using the sensor's name, as in the case of MODIS and AVHRR. While free text may be sufficient for human interpretation, it hinders machine interoperability. The standard's extensibility through profile development is a potential solution to this problem: ISO 19115 allows communities to tailor schemas for their specific needs when the original schema is insufficient. Although profile development offers freedom and flexibility to communities and developers, in order to promote interoperability, extensions could be developed based on existing sensor standards such as SensorML. Through extensions (see the sketch below):
- complete sensor data can be embedded in the metadata;
- data pertinent to a specific instance of a sensor (location, etc.) can be embedded, while general information (sensor type, model, make, etc.) is provided via a pointer to an external source; and
- complete sensor data can be provided using a pointer to an external source.
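As one hedged illustration of the approaches just described: the keyword route keeps the sensor name as free text inside the ISO 19139 encoding, while a profile extension could instead point to a complete external description. The gmd keyword structure below follows ISO 19139; the sensorReference element and the URL are invented for illustration:

  <gmd:descriptiveKeywords xmlns:gmd="http://www.isotc211.org/2005/gmd"
      xmlns:gco="http://www.isotc211.org/2005/gco">
    <gmd:MD_Keywords>
      <!-- Sensor name as a discovery keyword: searchable, but only as text -->
      <gmd:keyword>
        <gco:CharacterString>CTD</gco:CharacterString>
      </gmd:keyword>
    </gmd:MD_Keywords>
  </gmd:descriptiveKeywords>

  <!-- Hypothetical profile extension: reference the full SensorML
       description rather than redefining its fields (see also C8) -->
  <sensorReference xmlns:xlink="http://www.w3.org/1999/xlink"
      xlink:href="http://example.org/sensors/seabird-37.xml"/>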
Strengths
The following strengths were identified in working with this standard:
- International standard
- Well documented
- Interoperable
- Crosswalks exist to CSDGM and DIF
- Extensible
- Modular
- Includes standard code lists (supports multilingual applications)
Recommendations, Weaknesses, Issues, and Unresolved Questions
The following recommendations, weaknesses, issues, and questions were identified:
- There is a fundamental difference between discovery (or content) metadata and use metadata; use metadata describes the syntactic and structural aspects of the data. ISO 19115 is a discovery metadata standard and, by itself, does not meet the sensor definition requirements.
- Why is ISO 19115 preferable over CSDGM?
- ISO 19115, like all ISO standards, requires a fee to download the specification document.
- It is recommended to use SensorML to populate the ISO Marine Community Profile's mandatory and core metadata elements.
Controlled Vocabularies Needed
The following controlled vocabularies were used:
- Global Change Master Directory (GCMD) Science Keywords: discovery metadata for the ISO theme keyword.
- International Hydrographic Bureau (IHB) area names: not useful for localized area names; a national gazetteer would be more useful.
Other candidate controlled vocabularies that could be used:
- GCMD Instrument Keywords (this list may not be complete and may be too general)
- GCMD Platform Keywords
- British Oceanographic Data Centre (BODC) parameter vocabularies
- GCMD Location Keywords (not useful for localized area names)
- Unified Code for Units of Measure
- Climate Forecast (CF) standard names for units
- European Petroleum Survey Group (EPSG) Geodetic Parameters
There may be other controlled vocabularies that need to be identified by the sensor community.
Other Comments on the Specification
In summary, the group thought this content standard was somewhat useful for describing sensors. As well, the standard was thought to be somewhat applicable for describing sensors within a comprehensive ocean observatory cyberinfrastructure, such as that being suggested for ORION. In terms of discovery of the existence of sensors and associated data, this standard was thought to be significant. The final recommendation from the group was to use SensorML to populate the mandatory and core metadata elements in the ISO Marine Community Profile.
D2. Content Standard for Digital Geospatial Metadata (CSDGM)
The CSDGM group was composed of experts with different experiences in the standard. Although the group recognized that having a member knowledgeable about instrumentation would have been helpful, they also recognized the benefit of first building a CSDGM record for a sensor before starting to answer the use case questions.
Strengths
The following strengths were identified in working with this standard:
The group acknowledged that the CSDGM is very flexible, so it is possible to document a sensor using different fields from the standard. Effectively, this would

create metadata records in different forms, all describing the same sensor, having the same basic content but with different metadata fields being utilized. During the activities, the group recognized that the CSDGM was able to capture much (but not all) of the information about the CTD. The general conclusion was that the standard was good for basic data discovery.
Recommendations, Weaknesses, Issues, and Unresolved Questions
The following recommendations, weaknesses, issues, and questions were identified:
- Given the flexibility of the CSDGM (and thus the possibility of using different fields to document a given sensor), it may be necessary to incorporate different templates when comparing CSDGM sensor records to others. Or, it may be necessary to develop guidance on how to create consistent instrument-based metadata records.
- The group recommended that the standard needs more support for instrumentation metadata and positional information related to the instrumentation.
- For interoperability (machine-to-machine), the standard needs to be more constrained. The standard can contain the sensor descriptions, but variations in the exact placement of the content would result in interoperability issues. The standard needs greater constraints to impose limitations on the existing placement variations. However, the specific usefulness of the standard depends on the particulars of the instrument being described.
- The group also observed that CSDGM geo-location information does not distinguish the position of the sensor from the position of the data. In the use cases, the sensor and data may be separated in space, thus requiring two locations to describe both sensor and data. The same problem would exist if the sensor and platform were not co-located.
- There was also the suggestion that the CSDGM metadata elements do not fully support sensor and instrument naming.
Controlled Vocabularies Needed
The group recognized the need for vocabularies external to the standard, in particular for instruments and sensors. A vocabulary for variables was also required.
D3. DIF/Auxiliary Description of Instruments
The Auxiliary Description of Instruments (AD-I) is a project under development in NASA's Global Change Master Directory (GCMD) to allow instrument metadata in GCMD data submissions that use the DIF. This specification would serve to extend the DIF metadata currently accepted. With the exception of the lead, the group had little knowledge of the AD-I specification. This resulted in a thorough examination of the specification and supporting information; each AD-I element came under considerable scrutiny.
Strengths
The following strengths were identified in working with this specification:
- user interface and access methods (query: Salinity/Density + CTD + Fixed platform + spatial coverage)
- short list of inputs/fields results in easy AD-I (DIF) form completion
- intuitive search methods in DIF
- generalized nature of content makes searching feasible
- AD-I top level (Instrument_Category) intuitive
- good tool for data discovery
Recommendations, Weaknesses, Issues, and Unresolved Questions
The following issues, recommendations, and questions were identified:
- Instrument Class appears to indicate a classification of instrumentation.
However, supporting descriptions contain types of instruments (e.g., Current/Wind meters), types of platforms (e.g., Profiler/Sounder), and sampling schemes (e.g., Corers, Samplers). This results in a single instrument (e.g., CTD) possibly being in four different Instrument Classes. The class conceptual model is suitable, but the class field name introduces confusion.
- Some sensors fit in multiple classes. Classes at the top level cross domains (e.g., instruments, deployment method; some are phenomenon specific; CTD falls under Profiler, conductivity sensors, gauges).
- First start with a conceptual model, and then develop the keywords or vocabulary.
- A feature matrix for specifications is needed (specifications are created for different purposes): which specification should be used for a particular purpose?
- Not suitable for describing specific instances of instruments/sensors (but that is not its purpose).
- Good tool for discovery of data.
Controlled Vocabularies Needed
The following issues on controlled vocabularies were raised:
- Need to be careful about the vocabulary (additional work needed for instrument classes)
- Clear definition of new keywords (this is in progress)
Other Comments on the Specification
The workshop created useful feedback for the developers of the Auxiliary Description of Instruments (AD-I) specification for DIF.
Other group discoveries included the class vs. instance content for the AD-I Short_Name. Short_Name is a required field that provides the link to the DIF record. One realization the group made was that, to facilitate the search, the Short_Name needs to be a generic name, not a specific name (i.e., not a name that includes the instrument model number). However, Short_Name must be unique within the current AD-I definition.

This also made the group realize that there are levels of metadata. For example, at the use level, metadata should include details such as processing coefficients, while such details are not required at the data discovery level. The group also noted that some AD-I field labels were difficult to interpret, due in part to the ordering of the words (e.g., Instrument_Associated_Sensors vs. Sensors_on_Instrument). Other details included the need for Online_Resource to have a web link to a generic description of the instrument. This is because the AD-I record describes a generic instrument (e.g., CTD), not a specific model. As well, Instrument_Logistics and Spectral/Frequency Information only apply to unique instruments (e.g., satellites); these elements do not apply to commercial instrumentation (e.g., in situ instruments). Instrument_Logistics describes a specific rather than a generic instrument, indicating an inconsistency within the AD-I that resulted from its development for satellites rather than in situ instruments.
The group also recognized that the metadata was invariant for these particular use cases. This is because the AD-I platform field was not required; therefore, multiple platforms could be associated with a single instrument class. This means a single AD-I record covers all use cases for this application of AD-I. The AD-I Short_Name under instrument generated the most discussion for this evaluation. Since it is a class description, the AD-I does not change for different platform mountings of an instrument.
D4. SensorML
The SensorML team had several sophisticated data managers with basic SensorML experience. In the course of a detailed walkthrough with concrete examples, the team critically evaluated many aspects of this specification.
Strengths
The following strengths were identified in working with this specification:
- Xlinking is a powerful way to atomize information (but implementations need to accommodate versioning of linked/referenced content; a best practice is needed).
- Arrays can be handled by using an index for only the dynamic pieces.
- NASA funding was awarded to improve accessibility of SensorML.
- Soft-typing and hard-typing are very powerful features (but see issues).
- The specification provides a very complete and sophisticated metadata framework.
- As with other OGC standards, it will be openly available once approved.
Recommendations, Weaknesses, Issues, and Unresolved Questions
The following issues, recommendations, and questions were identified:
- The use of the term "Metadata" to characterize just the general metadata is confusing.
- It is not very clear how to get started (e.g., the need for the specification header); there is a need for a "SensorML for Dummies".
- The way to enforce validation is an implementation detail, but without that best practice, it is unclear how to make this specification work (need to define what codespace points to; does it point to a list, or define values through facets?); there probably needs to be application-focused validation.
- Are there no controlled vocabularies in place yet? Apparently not, other than the Unified Code for Units of Measurement (UCUM), which users can use by reference.
- Soft-typing vs. hard-typing is a difficult concept for many people; best practices with clear instructions are needed (a sketch appears at the end of this section).
- The Sensor Web Enablement (SWE) Common section is helpful reading for understanding the basic mechanisms of the schema.
- Community-wide vocabularies need to be developed.
- The group endorsed the creation of a community-wide SensorML sensor registry (ACT may be a good place for registration).
- SensorML should be endorsed by the international open-standards community.
- Use SensorML to populate the ISO Marine Community Profile mandatory and core metadata.
- Harmonize SensorML and TransducerML: document complementary usage, and see if they can be part of a single specification.
Controlled Vocabularies Needed
The following controlled vocabularies were used:
- Unified Code for Units of Measurement (UCUM)
The group recognized the need for community-wide vocabularies on topics such as:
- sensor type
- sensor parameter (calibration, number of samples, scan angle, etc.)
- classification type (sensor type, intended user, etc.)
- process definition (actuator, detector, algorithm, etc.)
- phenomena
- units
- useful values (standard deviation, DOP, mean, median, percentage, etc.)
- identifier type (manufacturer, model, registration, expedition, etc.)
- role
- interface
Other Comments on the Specification
The group endorsed the SensorML specification, as coupled with the Common Observation Model. The group also endorsed the creation of a community-wide SensorML sensor registry, with the following recommendations:
- It should be modern, robust, and extensible;
- It should accommodate multiple versions/editions (simpler records for basic discovery, and detailed records for engineering);
- It should encompass a growing body of open-source tools for authoring records and executing processes;
- It should initially be populated with a small set of exemplars, for example MGDS multibeam sonars and TAMU tide gauges, and with a forum for feedback to the authors.
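To illustrate the soft-typing concept raised in the issues above: a soft-typed component is a generic SWE Common element whose meaning comes entirely from a definition URI, rather than from a dedicated element name baked into a schema. The fragment below is a hedged sketch; the definition URI is invented for illustration, and the uom code is UCUM's code for degrees Celsius:

  <!-- Soft-typed: a generic Quantity, given meaning by its definition URI -->
  <swe:Quantity xmlns:swe="http://www.opengis.net/swe/1.0"
      definition="http://example.org/phenomena/sea_water_temperature">
    <swe:uom code="Cel"/>
  </swe:Quantity>

A hard-typed alternative would instead define a dedicated element (say, a Temperature element) in a new general-purpose schema, which is why the team's notes below reserve hard-typing for schema designers.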

The following are additional notes by the team:
- The validdate (validTime?) component can be used to constrain descriptions that only apply within a given time window (e.g., for a mission).
- Time is of first importance, but is not shown on most of the images in the presentation.
- The University of Alabama in Huntsville Space-Time Toolkit software can handle georegistration information and integrate it on-line.
- If algorithms are described in SensorML, how can they be discovered? This is not clear.
- The pattern of SensorML is Object-Association-Object. (This takes out the impedance between XML and object modeling.) Most associations include xlink references, which allow descriptions and other things. (This removes the impedance between XML and relational databases.)
- The difference between device ID and serial number allows locally unique IDs.
- Soft-typing is mostly what people should be using in their own schemas; hard-typing is used when designing a new, general-purpose schema to be used in many different descriptions.
- xs:id anchors are used for xlink references.
D5. TransducerML
The TransducerML group was not familiar with the specification, and most of the time was spent learning about it. Responding to questions, the following distinction was offered by the group lead, one of the TML developers: like SensorML, TransducerML is (as described in its documentation) a language for capturing and characterizing not only data from transducers, but information necessary for the processing and understanding of that data by the eventual recipient of the transducer data.
Strengths
The following strengths were identified in working with this specification:
- Scalable, dynamic, extensible, efficient, complete; strict data-metadata connection (data are maintained with the metadata descriptors)
- Transducer- and application-agnostic
- Captures both sensors and transmitters, so can be used for commands also
- Useful for both static and streaming data
- Includes many detailed descriptors, such as: transducer properties and characteristics; precise time-tagging of data; calibration; operational conditions; device settings; relationships among transducers in a multi-component system; other information critical to data processing; logical models; behavioral models; transfer functions
Recommendations, Weaknesses, Issues, and Unresolved Questions
The following recommendations, weaknesses, issues, and questions were identified:
- Stand-alone namespace; needs remapping to other namespaces
- Needs a phenomenon dictionary
- Needs a transducer registry
- Extensible descriptions of instrument policy are needed
- A clear distinction between SensorML and TransducerML, or a mechanism to use them both within a sensor description, is sorely needed
Controlled Vocabularies
- All elements are enumerated
- Units of measure are always meters, radians, and seconds
- Need use of a common phenomenon/measurement dictionary
Other Comments on the Specification
- Provides capabilities for tracking provenance

Appendix E. Workshop Survey Results
Four months after the workshop, the organizers distributed a survey to all the participants. The survey text is at the end of this appendix. Although this was long after the workshop, the survey thereby captured the participants' long-term assessment; it also addressed post-workshop activities. These results were analyzed based on responses received as of April 20, 2007.
Survey Analysis
39 responses were received from the 52 participants, for a response rate of 75%. Most respondents answered all the relevant questions, including many written comments. (Thank you!) All the specification teams and roles were represented by at least 4 responses, except the TransducerML team (with 2). Responses were averaged by substituting numerical values for the textual selections. (While averaging words is statistically dubious, the Principal Investigator did so anyway, to easily summarize the results.) See the Survey Numerical Rankings section below for a detailed description of the process used to analyze responses.
Overall, the workshop was well received, with an average rating of Good. Logistical rankings were particularly high, but are not considered further in this section. Most technical aspects of the workshop, considered on their own, also rated close to or above Good. The highest-rated technical areas within the workshop were the Workshop Value To You area and the Track 1 Discussions; the Conclusions in Draft Report also received a high rating. The weakest category within the workshop (by a wide margin) was the Track 2 Exercises, only somewhat above Average. The textual comments and lessons explain the weak ratings for the workshop training, as well as many of the other results noted here. Although not summarized below, the new question "How did the workshop compare to your expectations?" also elicited a relatively weak rating (5.9).
When broken down by Teams and Roles, the average ratings are more variable due to the small number of responses. (In particular, the TransducerML ratings are of limited meaning, given 2 respondents.) The overall ratings were surprisingly consistent across teams and roles, however, with differences evident only in particular questions.
The questions regarding follow-up activities were considered a significant indicator of workshop success, since maintaining post-workshop momentum is notoriously hard. All respondents have talked to people about the workshop, most of them Some or Often. Over 70% have used what they learned Some or Often, and a similar number expected, or likely expected, to use what they learned in the future. For this report, we asked people to review the draft conclusions as part of their post-workshop survey. 45% of the respondents read and supported them without reservation; an equally encouraging result is that another 26% read them and supported them with some reservations, which they provided. No one said they could not support the conclusions.
Survey Numerical Rankings
We analyzed the multiple-choice responses by category and summarized them under the overall categories of workshop experience (responses reflecting the workshop itself) and workshop engagement/follow-on (responses about follow-on activities related to the workshop). The workshop experience category was also broken out by technical and logistical aspects, to try to learn whether the workshop achieved its technical goals.
The 5 multiple-choice responses were assigned the numerical weights 10 (for the best response), 7, 5, 3, and 0. (For the Duration question, Just right was given a 10, Not quite enough and A little too much were weighted as 7, and Not nearly enough and Way too much were given a value of 3.) The non-linear weighting reflected the subjective nature of the words chosen, as determined by the Principal Investigator; it also enables comparison with the previous MMI workshop. (The non-linear weights do not significantly affect the relative ratings, as compared to a linear scale.) In the tables that follow, white entries are averages of the specific questions of the same name, and colored entries are summaries of multiple questions. The principal numerical rankings are provided in Table E.1 below.
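As a worked illustration of the averaging (the individual responses here are invented for the example, not drawn from the survey data): if four respondents rated a question with the best response, Good, Good, and Average, the substituted weights would be 10, 7, 7, and 5, giving an average of (10 + 7 + 7 + 5) / 4 = 7.25.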

Overall Workshop Experience: 7.1
Total Averaged Entries: 7.0
General Experience of Workshop: 7.3
Technical Aspects: 7.2
Plenary Speakers: 7.4
Track 1 Task Suitability: 7.1
Track 1 Discussions: 7.6
Track 2 Trainers: 6.9
Track 2 Exercises: 5.9
Concluding Discussions: 7.3
Workshop Value to Participant: 7.6
Conclusions in Draft Report: 8.0
Logistical Aspects: 7.3
Technical Coordination/Scheduling: 7.2
Duration: 7.5
Logistics (non-technical): 7.7
Follow-On Participation/Engagement: 8.0
Have you talked to people about it?: 5.1
How often have you used it?: 5.1
Do you expect to use it in the future?: 7.7
Would you attend another?
Table E.1 Numerical Rankings by Topic for All Participants
All results were also broken out by Team and by Role; numerical rankings by Role and by Team are provided in the tables below. As suggested by the figure, a plurality felt the workshop needed more time.
[Figure E.2, Opinions on Workshop Duration: responses to "Did we allow an appropriate length of time?", on a scale from Way Short to Way Long.]
[Table E.3, Numerical Rankings by Topic and Role: columns for Leader, Presenter, and Participant, with the same row topics as Table E.1; the numeric values are not reproduced here.]
[Table E.4, Numerical Rankings by Topic and Team: columns for CSDGM, DIF, ISO, SML, TML, and Track 2, with the same row topics as Table E.1; the values reproduced here are Track 2 Trainers, 7.0, and Track 2 Exercises, 6.1.]

Survey Comments
The survey comments have been categorized. All the significant comments from the survey were included, edited slightly for readability and anonymity. Comments marked with [B] and [W] indicate answers to the questions "What was the best (worst) aspect of this workshop?" Other comments were received in response to specific ratings, or in the other comments section. Comments marked with [+] were duplicated to appear in multiple categories. [...] indicates elided text (appearing elsewhere).
General
Interplay of the participants. [B]
Enabled me to make some good contacts. [B]
I think the overall outcome of the workshop was already known and this was just an exercise to justify the recommendations. [+]
Excellent opportunity to review and compare, with other experts in the field of sensors and metadata, the applicability of various metadata standards to MMI and related projects.
As a Track 1 team leader, I was not able to attend other sessions and learn firsthand about the topics under discussion. This is only a minor concern, as just being able to attend the workshop has raised my awareness of important interoperability issues (specifically SensorML) and how these can be applied at my local level.
I was fairly new to the subject of metadata standards, so some more introductory/overview information would have been helpful. I realize this is a difficult problem to meet, because there is a mix of advanced and beginning users at the workshop.
Concepts around vocabularies. [B]
Diversity, range and depth of professional and scientific experience. [...] [B]
Good group - well organized. [B]
Provided a good introduction to metadata standards and some real-life examples of how they are being used in the marine community. [B]
Meeting others who are interested in metadata and passionate developers of metadata standards. Visiting Portland, Maine! [B]
Personally, I like workshops. They provide time to get into details. [B]
I got a chance to explore the topics presented both during the workshop and during the time the workshop wasn't happening, i.e. noon and break-times and evenings. The time to be single-minded is a rare luxury. [B]
Networking with North American experts; ability to share ideas with the broader marine community (which can be difficult in Australia, where one feels a sense of professional isolation). [B]
Practical application of standards. [B]
Hands-on work was very helpful to me, given that I'd just started working at GoMOOS at the time. It was a very concrete way to get up to speed. [B]
Interaction with others in the field. [B]
Exposure to and interaction with individuals developing and using various schema. Good brainstorming and information exchange. [B]
It allowed me to better understand the needs of various people in terms of describing their sensors. [B]
Getting a much better understanding of how the instrument-centric immutable static metadata fits into the bigger picture of archiving and mining finished data products. Also, an introduction to the different vocabularies, and the knowledge that a controlled vocabulary for oceanographic sensors does not exist, was enlightening. [B]
The workshop brought together the important players in marine metadata interoperability. [B]
I really enjoyed the workshop, learned a lot and hope to be more involved in the future.
Good workshop - thanks.
It was great to have so many experts in the field of sensor networks in one place to learn about cutting-edge work in the field. [B]
It was great to have access to representatives from the groups who are defining these standards (e.g., Alex Robin and Steve Havens). I was able to go to them with specific questions I had.

I think we should start working on a follow-on workshop (or several).

Overall, a productive and professionally organized workshop. I'm glad to see the goals of metadata publication, at least in theory, are so far along. I feel we can pull our organization out of the dark ages.

Start the planning earlier. Hands-on materials need to be prepared and exercised well in advance of the meeting. [+]

Do it again, soon, with minor revisions.

The workshop was so long ago, I actually forget what was presented and how, but I do remember being glad I went. As is usually the case with these workshops, I made one or two contacts with a few people that saved me days of work in the long run; so, while the workshop itself was helpful, the personal contacts and a few bits of information obtained from conversations on coffee break end up being extremely helpful.

The discussions were fantastic. Just to get a group like this together is a true value. I think the exposure to technologies, projects and participants will help our efforts to develop interoperability in the future. As an example, I have to admit that I had not heard of SensorML - now I'm integrating this into our system.

Focused breakout on SensorML - with the author of the standard in the room - was very informative and useful, and led to concrete action items and ideas for future work. [...] [B]

I understand that I was invited primarily as a contributor. So one of the real questions is whether others gained from my contributions.

General Logistics

Regardless of other meetings, Portland, ME is a difficult venue for me to get to reliably. The BEST I can plan on is to fly to Boston and drive 2 hours each way to Portland. [...]

Excellent logistics, especially generous amounts of food! The best conference I have been to in that respect, at least in the scientific community. [B]

Lack of closure on exercises and discussion. [W] [+]

I had to travel quite a bit and was jet-lagged out of my mind most of the time... thereby missing some social opportunities. But then I am from Hawaii, so unless MMI plans its meeting in Honolulu or San Francisco, that would have been the case. [W]

Very nice setting; nice hotel. [...]

Appreciated the East Coast location - convenient to have the meetings at the hotel - appreciated the hotel direct billing. [B]

[...] Meeting was the right size - not small and clubby, but not giant auditorium-style either. [B]

I went to the Web site and it would not proceed without JavaScript being enabled. For security reasons, I do not have JavaScript enabled.

Logistics such as the ability to plug one's laptop in were inadequate. Also, I found myself going from plenary room to breakout session, back and forth, for Track 1, carrying my laptop around - this was inconvenient. I would have liked to stay in the same room, at least during the morning or afternoon, and it

would really have been nice to stay in the same room all day. This would be possible if the tracks were held sequentially. [...]

Talks

The presentations were outstanding. I learned quite a bit from each speaker and really improved my understanding of many of the projects that are going on in this field. [B]

Could have discussed the emerging technologies more, rather than spending time on historical metadata experiences and stove-piped systems. [W] However, the talks were quite informative and I met many experts and interesting people at the Workshop.

The presentations were very good and I learned a lot. There was a very good discussion at the end about various standards' appropriateness for use with sensors. [B]

Training

Track 2 exercises were too elementary; I expected to learn more about SensorML, and also wanted to see the discussion moving towards Sensor Web Enablement and the OGC standards.

Everything was great, except that the XML/oXygen exercise turned into a typing session. At the beginning a lot of questions were asked about namespaces, URNs, etc., and the lead simply said we weren't going to deal with that. But that seemed to be an important part of the reusability issue: being able to leverage other sources, then customize for those unique things. [...]

The Track 2 training didn't go as well for me as I had hoped. Not sure why. Maybe not basic enough.

The tutorial section was not entirely relevant, as many participants were already familiar with the material. For future workshops, a poll of participants about interest in several topics might find the right balance.

As a trainer, I would have needed to know the audience's level of XML knowledge better. It was much lower than I expected, and I believe my examples were too complicated. [+]

Again, needed more info on the uses of namespaces, URNs, etc. [W]

Track 2 XML session assumed participants had extensive experience with XML. [W]

I didn't progress as far as I had hoped. I had to leave too soon, or maybe the workshop ended too soon. [W]

The exercise did not seem worthwhile. [W]

XML training. [W]

It was too short to really teach most aspects of SensorML. [W]

I don't think that the training exercise was a good use of time. The directions in the beginning, describing what we were supposed to accomplish, were not clear. [W]

Exercises could have been more focused; maybe if the trainer had led the whole group through the exercise together. [W]

There was insufficient time to cover the material. People had widely varying levels of experience, making it difficult to start with everyone on the same page. [W]

Basic exercises. [W]

In my opinion, too many basic exercises (like XSL transformation), but very helpful suggestions.

Start the planning earlier. Hands-on materials need to be prepared and exercised well in advance of the meeting. [+]

Process

Our group was also slowed down by technical difficulties, because we were not adept at editing in the Plone environment.

As a trainer, I would have needed to know the audience's level of XML knowledge better. It was much lower than I expected, and I believe my examples were too complicated. [+]

The presentations on the various schemes' strengths and weaknesses. This "Consumer Reports" service was very valuable. [B] [+]

The time left for exploration and discovery of SensorML. [B]

Overall discussion and outcome. [B]

The first time I participated in a workshop that was organized that way.
I copied your organizational model at our last workshop, as I got fully convinced by the value provided by a number of external experts. I would have kept it a bit smaller, though. [B]

My expectations were high going into the workshop (based on Workshop 1). One more day would have been good. Tying together the different track work would have been a plus - seeing the metadata for the sensor displayed in FGDC, ISO, TML, SensorML,...

Lack of closure on exercises and discussion. [W] [+]

The fact that there was not much follow-up on it until this survey request. Since I was not asked to follow up with anything right away, I have forgotten most of what I learned at the workshop. [W]

I think the initial directions could have been clearer. In our Track 1 group, we initially struggled with "What are we trying to do here?" In the future, it may be helpful if the organisers first had someone totally unfamiliar with the instructions read and comment on them (i.e., vet the instructions beforehand).

Timing - that is a difficult question (number 4 above). The more time you give, the more time we would use. [W]

It was a bit rushed. [W]

There didn't seem to be a time for all the Track 1 folks to get back together and compare notes, and then combine the experience of the Track 1 and Track 2 groups. [W]

Some additional introductory information about the standards. [...] [W]

I didn't like the fact that Tracks 1 and 2 were conducted simultaneously and that I had to pick one or the other. I was familiar with FGDC, DIF, and ISO but not SensorML and other metadata standards. However, there was no way to both comment on the standards that I knew and learn about other standards. [W]

As I said above, I liked the way you organized the workshop a lot. Next time, I would allow more time for the different groups, as comparison of the different metadata standards was rather poor. [+]

When I clicked on the link in Question 9 to view the draft conclusions, the results opened in this same window, and my form was cleared - all of the option buttons, but not the text boxes - when I returned to it. That's a bit annoying!

I generally avoid parallel processing and trying to do too many things (even two things, like Track 1 and Track 2!). Basically, I think this meeting could have been somewhat longer for those who wanted to cover both tracks, like myself. Then people could still have chosen Track 1 in (hypothetical) Days 3 and 4, while the newbies could have prepped through Days 1 and 2 of the meeting and been ready starting Day 3 to graduate from Track 2 to Track 1. That would have been a more satisfying experience for me, personally anyway.

Some time to discuss with certain experts who were in other groups (but this can be done outside the workshop). [W]

There was a lot of talk about metadata crosswalks. This is very powerful - I think there is a benefit in having a hands-on demo mapping one standard to another in such a way that we may take home the tools to implement. This may be true for other aspects of metadata as well, e.g., how to best publish controlled vocabularies. [...] [W]
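To make the crosswalk suggestion above concrete, the sketch below shows the general shape of such a mapping as an XSLT stylesheet; it also illustrates the namespace declarations that several training-track comments asked about. The source paths follow the CSDGM element layout (idinfo/citation/citeinfo/title, idinfo/descript/abstract), but the flattened target elements are illustrative placeholders rather than the full ISO 19115 structure, and the stylesheet is an untested example, not a workshop product.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative crosswalk sketch: copies the title and abstract from a
     CSDGM (FGDC) record into a simplified ISO-style record. The target
     element names are placeholders; a production transform would emit
     the full ISO 19115 element structure and namespaces. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>

  <!-- CSDGM records are rooted at <metadata> -->
  <xsl:template match="/metadata">
    <MD_Metadata>
      <!-- CSDGM title lives at idinfo/citation/citeinfo/title -->
      <title>
        <xsl:value-of select="idinfo/citation/citeinfo/title"/>
      </title>
      <!-- CSDGM abstract lives at idinfo/descript/abstract -->
      <abstract>
        <xsl:value-of select="idinfo/descript/abstract"/>
      </abstract>
    </MD_Metadata>
  </xsl:template>
</xsl:stylesheet>

Such a stylesheet runs with any XSLT 1.0 processor (for example, xsltproc crosswalk.xsl record.xml). One stylesheet per mapping direction is the kind of take-home tool the comment asks for; the hard part is agreeing on the element-level mappings, not the transformation mechanics.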

Standards Evaluation

The presentations on the various schemes' strengths and weaknesses. This "Consumer Reports" service was very valuable. [B] [+]

Standards evaluation tasks. [B]

Having feedback from different users on the use and choice of different metadata standards to describe the same kinds of data. [B] [...]

The standards evaluation by use case was an excellent idea and produced good, usable results. All in all, a good model for future workshops.

It seemed that our breakout group discussions were unfocused, and it was difficult to arrive at conclusive and well-thought-out recommendations or feedback. Our group consisted mostly of those who do not use the metadata standards hands-on, nor were any of us familiar with the sensors (to be described by metadata). That being said, I thought we still had good discussions and learned a lot from each other.

The exercise was good but perhaps too specific; in the end it was difficult to compare the quality of all the standards used in the workshop. Perhaps a comparison of existing metadata using different standards to describe the same data would be more efficient.

As I said above, I liked the way you organized the workshop a lot. Next time, I would allow more time for the different groups, as comparison of the different metadata standards was rather poor. [+]

Not enough time to complete tasks for Track 1. Perhaps some preparatory assignments could offset the workload during the workshop. [W]

Lack of closure on exercises and discussion. [W] [+]

Conclusions

I think the overall outcome of the workshop was already known and this was just an exercise to justify the recommendations. [+]

I have some reservations about the "survival of the fittest" notion of metadata acceptance. I think the metadata standards that have been popular became so because of the technical know-how at the point in time when they emerged; once some big institutions adopt them, others follow, and they become the unofficial de facto standard. However, accepting that as the standard might not allow current innovations or technologies to be incorporated, and impedes progress. I feel that metadata is something that is constantly changing, as new ways of interpreting the data emerge and new data dissemination needs are identified. Hence we need to find ways to provide flexibility (plugins, extensions, etc.) and reusability. I feel that concentrating only on the syntactical aspects of metadata leaves individuals unsatisfied however much we try to standardize, as some elements in the syntactic metadata will be under-expressive, confusing, or unnecessary for a particular project. I think knowledge acquisition tools are more suitable for such tasks: while giving enough freedom to express one's data in the way one's organization perceives it, they also provide a standard terminological base on which it can be founded. I think the report does mention some of these aspects, and I think that is a reasonably good way to go.

Some information in code lists should not be part of a standard, but should be maintained in registers of codes and parameters, as is being done for sensors. There should be a register of identifiers. An ISO standard governs registers related to digital geospatial information.

Help desk support is a good idea in principle, but will work in practice only if the help desk is funded for that purpose.

The ISO standard provides metadata elements that allow the definition of policy for a given instrument. ISO/TC211 maintains a database of terms defined in its standards.
Such a database (and possibly registers) is needed for any controlled vocabulary.

In Table X, the Remote Sensing Extensions have been endorsed by FGDC and are catalogued as FGDC-STD. U.S. Office of Management and Budget Circular A-119 (1998) prescribed that the U.S. government should use existing voluntary consensus standards wherever possible, rather than creating its own. Since then, FGDC has concentrated on adopting existing standards that cover areas in which there is a need for standardization. In particular, in the area of metadata, it is adapting those parts of the ISO standard that deal with areas covered in the FGDC standards.

Creating a GUI to enter information before we have a controlled vocabulary does not make sense. Creating a common data model in UML adds another complexity that could hinder implementation for all of us who are not familiar with UML.

Complete - informative. Sorry to see so many volunteers "to be determined" in the workshop recommendations section.

Hard to assess the conclusions in the draft report, as the comparison of standards was underrepresented. General conclusions are OK.

The conclusions were very well written and captured what I heard at the workshop. I hope that these get shared with other groups, and I plan to share them myself within the DMAC community.

Recommendations on vocabularies and content standards are excellent. I would be interested in participants' advice regarding the following questions: which standard is going to be the most relevant for describing sensors; what about the other ones; a list of tools to implement these standards; and examples of interoperability between projects or institutes. The content standards should be described in a way that a non-programmer can understand.

The process to implement the draft conclusions is not addressed. How will consensus on the common data model and vocabularies be reached? It seems that this is a task much bigger than any workshop, or a handful of workshops, can address.

Although I did not look at the conclusions in detail, I may refer to them in the future if they are kept available online.

Top of page - you might want to add something to the effect that vendor-supplied software used to set instrument configurations should record metadata in a standard format (XML?), not a proprietary one.

Page 18 - what about EPIC key codes for measured variables? These are kind of cryptic, but they have been around for a while.

I have several comments on the draft Workshop Conclusions document.
1) Section titled "How to Reference It", paragraph starting "To take a concrete example" - I don't think a simple misspelling should be used in the example. This is because a misspelling may have implications for data processing (this is the example given) or may not have such implications. For an example of the latter, if we just have a spelling mistake in the Abstract description of an asset, would this require a new UID? I don't think so - it would require a new version number or new update date, but the record still describes the same asset, and in this case no new UID would be required. I think the example should be reworded in some way to make this clearer.
2) Section titled "Extensions and Profiles" - the equations given both equate to "profile". I am wondering if it would be useful to break out of the second equation the part that pertains to the first.
For example, the equations could be:

    metadata content standard + extensions = profile
    metadata content standard = core metadata set + optional elements

Personally, I find two equations that both equate to "profile" somewhat confusing.
3) Section titled "Content Standards" - I would suggest numbering, lettering, or in some way uniquely identifying the recommendations. That way, they could be referenced explicitly in other documents.
4) Section titled "Vocabularies" - there is a broken sentence. Search for "document their Create a" and you should find it.
5) Section titled "Needed" - missing space. Search for "contentstandard".
6) Section titled "Appendix N: Standards Comparison" - the first sentence makes it sound like the reader is going to read about a comparison. I would suggest deleting the first sentence. A general comment on the same

Appendix - the use of the word "standard" troubles me. Whether or not this is a valid comment is a question, but I personally don't see DIF as a standard at the same level as I see FGDC or ISO. I would tend to refer to DIF as a specification, but not a standard. I suppose I could be convinced it is a NASA internal standard, but that is not what we are concentrating on in this report. I think the report and Workshop use of the word "standard" really refers to an internationally recognized standard. DIF does not qualify in this context. I would suggest examining the text with the word "specification" in mind, and very careful use of the word "standard".

As pointed out in the final discussion, SML/TML are implementation specifications and DIF/ISO/FGDC are content standards. Although there is much similarity, I don't think that all should be addressed as content standards. That being said, I don't have a better solution! Perhaps a simple notation in the conclusions is sufficient?

Imposing responsibility upon sensor manufacturers to supply compliant metadata may be reasonable for the simple cases, but some instruments are so highly configurable that the number of parameter combinations resulting in uniquely different data field mappings is effectively unlimited. In such situations, where supplying static metadata for each such combination is untenable, an obvious solution is for vendor-supplied configuration and calibration software to spit out compliant metadata whenever the configuration changes. The problem is that development of such tools can have significant cost. Users complain that metadata is too complicated to manage, but they are unwilling to pay for it. Many end users don't even recognize the need for metadata. Manufacturers compete on price, and cannot absorb the incremental cost of standards compliance and remain competitive. The cost of metadata standards compliance (and the broader issue of common configuration and control interfaces) must be borne by the cyberinfrastructure programs that require it. The question then becomes one of system interfaces. Simply funding manufacturers to develop configuration applications that generate compliant metadata will achieve the desired output (metadata), but without design oversight and interface specification, such applications may not easily integrate with non-interactive observatory control processes. Allocating full design and development responsibility to cyberinfrastructure developers may be equally unworkable, in having to cope with each instrument model having its own unique configuration and control interface. Perhaps the solution is to fund individual manufacturers to fully disclose their proprietary interfaces and to develop compliant open-source drivers, quite possibly leveraging their existing proprietary code to do so.

The above link to the conclusions is broken.

The conclusions link is broken (specifically, I am getting an HTTP 404 error)... I am assuming they are the same as written at the workshop.

Appendix F. Workshop Lessons Learned

The workshop team of Graybeal, Bermudez, Watson, and Howard knew this workshop would be particularly challenging. The logistical complexity was higher than that of the last workshop, which was significant enough; the preparations began late because of the strong desire to coordinate with the ACT workshop; and much of the preparation time for two of the participants was lost to new parenthood.
Further, one of us would not be available during the workshop itself, another could only arrive the night before, and a third was engrossed in a preceding workshop most of the week leading up to this workshop. On top of that, this workshop addressed even more advanced technical material than the previous one, and involved more preparation of training materials. Thus, we were confident of lessons that we would be learning, before the workshop even began:

- Preparation, of training materials and group leads particularly, is very important.
- A dry run of the intended process will find many problems.
- Assign dedicated roles to each breakout team participant, and train them in their roles.
- Having someone on scene to handle logistics simplifies life considerably.
- Technical training always takes longer than you think.
- Allow time to regroup.

The preparations implied above were unavoidably omitted, at some cost to the final result.

Another logistical concern related to workshop scheduling and transportation. We knew the location was not exactly a central hub, but we strongly wanted to connect to the Alliance for Coastal Technologies workshop being held at the same venue that week. At the same time, we didn't want to force people to stay the night Friday, so we ended the workshop earlier on the last day (2 PM) than was strictly desirable. In retrospect, even with perfect preparation, the evaluations probably demanded more than a full 2-day workshop anyway.

Anticipated issues notwithstanding, the workshop overall was quite successful, and some specific strategies described here contributed to that success.

Successful strategies:

- Targeting advanced topics created a very high-powered audience.
- Identifying standards experts as group leads was effective, with some qualifications.
- Brainstorming potential action items was a great way to collect insights and expertise.
- Joint plenaries, breaks, and meals proved very effective collaboration opportunities.
- The forum provided numerous networking and strategic opportunities.
- Much logistical support was effectively provided by the hotel.

There were several areas that did not work as envisioned. The suggestions provided below are intended to make any future workshop of this type more successful. The opportunities for improvement (beyond those noted above):

- Evaluation breakout groups did not go far enough. The original vision was that specification instances would be created for at least 2 sensors, and each would cover multiple use cases. Only a few groups produced any reasonably complete specification instances, and many of those involved more than a little hand-holding; no group satisfactorily addressed multiple use cases; and lessons were not rigorously collected. These points are discussed further below.

- Evaluation group results weren't equally useful, and couldn't be compared. The value of breakout results is strongly influenced by three things: expertise of the group members; willingness of the leads to produce the desired results; and entrainment, of both group members and leads, by the workshop organizers. Some groups produced detailed and insightful comments, others were in more of a training role, and the workshop organizers didn't provide a model for producing crisp, comparable results.

- A lot of insight and questions were never elicited. The hope was that the very knowledgeable attendees could be tapped for questions and criticisms regarding each specification, but this was clearly unrealistic. Things to improve that might fix this problem include: allow more time; provide more mechanisms to offer comments and questions; teach the facilitators to solicit those inputs; create expectations for the group leads to reply to those inputs; and allocate time in plenary sessions specifically for that purpose.

- Logistics issues led to poorer results. Many things took a lot of the workshop organizers' limited time, and as a result some opportunities were lost (no photos of the event, some presentations misplaced, some oversights not corrected in time, and so on). Fortunately, most of these issues had only limited technical ramifications, but the overall effect was noticeable, and delayed production of the final report.

- The training target is too big to cover easily. Despite our attempts to direct advanced participants to Track 1 and basic participants to another workshop, training complaints covered the range from too simple to too hard, with many issues in between. The take-away lesson is that satisfactory training requires investing enough preparation time and on-site support to be prepared for a wide range of audience capabilities.

- People wanted to know what else was going on. A number of comments expressed frustration that the participant couldn't follow the other activities. Adding regular summary sessions, or more flexibility in group assignments, might improve this.

- Clearly state objectives for the groups, with milestones, and validate progress. The strategic priorities for the groups must be made clear and emphasized throughout the workshop.

- Allow walk-around time. At least one, and preferably two, workshop leads should have as their only activity walking around to the groups and guiding them toward their goals.

With these changes, the evaluation process should be able to actually create the 2 specification instances as originally envisioned, and provide some insight as to how well the use cases can be readily met, with time left over in a 3-day workshop to compare these results across standards. Although this may seem an aggressive schedule, the workshop organizers believe it is reasonable and necessary to expect that an educated technician can fill out one of these specification instances in less than a day (a minimal illustration of such an instance appears at the end of this appendix), so asking this very sophisticated team of people to fill out 2 over the course of a few days does not seem out of line.

One of the most interesting survey suggestions addressed several of these points: rather than scheduling the two tracks in parallel, the workshop could first provide more thorough training on metadata standards to those wanting training, then bring in the evaluation participants and let the newly trained join the evaluation if they wished.

The biggest disparity between the organizers' expectations and the workshop reality was seen in the products of the standards evaluation process. It should be noted that this process and its outcome were for the most part favorably reviewed by participants, and the breakout groups in fact made a lot of progress in their available time. But this is one area where a number of changes (beyond allowing more time) could result in a radically more valuable outcome:

- Arrange dedicated roles aside from the "specification expert": team lead, facilitator, note-taker, and communicator.
  By investing responsibilities in these dedicated roles, the groups will reflect much more the organizers' and participants' goals, and less the personality or agenda of the expert (though that was probably not a big factor in these groups). This also gives the expert more opportunity to simply observe, and to respond to concerns. Finally, it reduces the importance of any one individual to the success of a group.

- Require all participants to understand the basic principles of the specification. This means creating a document describing the specification (few of the specifications have one), distributing it at least a month before the workshop, and meeting with the teams to review it.

- Begin working together before the workshop. The group will become cohesive very quickly if it has worked together (or even just practiced) before the workshop begins. Find a way to have a remote collaboration preparation session or two with each group.

- Develop comparison criteria ahead of time. If the goal is to be able to compare specifications, develop a set of comparison criteria that can be filled out during the evaluation process. Ideally these are reviewed before the workshop, or at its beginning.

- Ensure comparable starting points. For best results during the meeting, each group should start out equally poised for success. If one area is significantly behind, find a way to bring it up to speed before the workshop starts.

- Arrange joint sessions of all the groups. Make the groups compare notes and standardize their approaches (the roles provide a good avenue for this).

- Plan on time to identify and address issues.
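As a minimal illustration of the "specification instance" referred to above, the fragment below sketches the skeleton of one in SensorML, one of the Track 1 specifications. It is a hand-written example, not a workshop product: the element layout follows the general pattern of SensorML identification blocks, but the namespace version, URNs, and identifier values are invented for illustration and should be checked against the current SensorML schema.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative skeleton of a SensorML-style specification instance
     for a single CTD instrument. All identifiers and URNs below are
     examples only; consult the SensorML schema for the normative form. -->
<sml:SensorML xmlns:sml="http://www.opengis.net/sensorML/1.0" version="1.0">
  <sml:member>
    <sml:System>
      <sml:identification>
        <sml:IdentifierList>
          <!-- A unique, permanent ID for the asset (compare the UID
               discussion in the survey comments above) -->
          <sml:identifier name="uniqueID">
            <sml:Term definition="urn:ogc:def:identifier:OGC:uniqueID">
              <sml:value>urn:example:sensor:ctd:sn1234</sml:value>
            </sml:Term>
          </sml:identifier>
          <!-- A human-readable model designation -->
          <sml:identifier name="modelNumber">
            <sml:Term definition="urn:example:def:identifier:modelNumber">
              <sml:value>Example-CTD-100</sml:value>
            </sml:Term>
          </sml:identifier>
        </sml:IdentifierList>
      </sml:identification>
      <!-- A complete instance would continue with classification,
           characteristics, capabilities, contacts, positions, and
           inputs/outputs, driven by the use cases being evaluated. -->
    </sml:System>
  </sml:member>
</sml:SensorML>

Filling out sections like these for a real instrument, across several use cases, is the day-scale task estimated above.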

Appendix G. Participant List

Name | Organization | Team | Role | Full Name/Affiliation
Steve Adams | Satlantic | Track 2 | Presenter | Satlantic
Dicky Allison | WHOI | Track 2 | | Woods Hole Oceanographic Institution
J Bruce Andrews | | Track 2 | | Woods Hole Group
Robert Arko | MGDS | Track 1 SML | | Marine Geoscience Data System
Jeff Arnfield | NOAA | Track 1 CSDGM | | National Oceanic & Atmospheric Administration
Matthew Arrott | | Track 2 | | Calit2, UCSD
Ian Atkinson | DART | Track 2 | | Dataset Acquisition, Annotation, Accessibility and e-Research Technologies
John Backes | Sea-Bird | Track 2 | | Sea-Bird Electronics, Inc.
Anne Ball | NOAA/NCDDC | Track 1 CSDGM | Team Lead* | IOOS Metadata Expert Team
Julien Barde | MBARI | Track 1 TML | | Monterey Bay Aquarium Research Institute
Bora Beran | CUAHSI | Track 1 ISO | | Consortium of Universities for the Advancement of Hydrologic Science
Luis Bermudez | MBARI | Track 1 DIF | Trainer* | Monterey Bay Aquarium Research Institute
Julie Bosch | NOAA/NCDDC | Track 1 TML | | NOAA / National Coastal Data Development Center
Mike Botts | UAH | Track 1 SML | Team Lead | University of Alabama in Huntsville / SensorML
Richard Bouchard | NDBC | Track 2 | | National Data Buoy Center
Eric Bridger | GoMOOS | Track 2 | | Gulf of Maine Ocean Observing System
Alan Chave | WHOI | Track 1 TML | Presenter | Woods Hole / ORION CI Concept Design Team
Dru Clark | SIO | Track 1 ISO | | Scripps / SIOExplorer at the Geological Data Center
Gerry Creager | SCOOP | Track 1 SML | | SURA - Southeastern Universities Research Association
John Cree | Environment Canada | Track 2 | Presenter | Reseau - Building Canadian Water Connections
David Dana | HOBI Labs | Track 2 | | HOBI Labs, Inc.
Ben Domenico | UCAR | Track 2 | | Unidata and THREDDS
Surya Durbha | | Track 2 | | GeoResources Institute at Mississippi State University
Janet Fredericks | MVCO | Track 2 | | Martha's Vineyard Coastal Observatory
Tom Gale | GoMOOS | Track 2 | | Gulf of Maine Ocean Observing System
John Graybeal | MBARI | Track 1 SML | Organizer* | Monterey Bay Aquarium Research Institute
Rainer Haener | GFZ | Track 2 | | GeoForschungsZentrum Potsdam
Steve Havens | IrisCorp | Track 1 TML | Team Lead* | IrisCorp / Transducer Markup Language
Matthew Howard | TAMU | Track 2 | Track Lead* | Texas A&M / IOOS DMAC
Thomas Ingold | Northrop Grumman | Track 2 | | Northrop Grumman IT TASC
Anthony Isenor | DRDC | Track 1 DIF | | Defence Research and Development Canada
Uday Kari | PDC | Track 1 SML | | Pacific Disaster Center
Ken Keiser | SCOOP | Track 1 ISO | | SURA Coastal Ocean Observing Prediction Program
Jens Klump | GFZ Potsdam | Track 2 | | GeoForschungsZentrum Potsdam
James Manning | NOAA/NEFSC | Track 2 | | NOAA's Northeast Fisheries Science Center
Luigi Marini | NCSA | Track 2 | | National Center for Supercomputing Applications
Scott McLean | Satlantic | Track 1 DIF | Presenter | Satlantic
Donald McMullen | LTER | Track 2 | | Sensor network autoscaling / LTER lakes
Melanie Meaux | NASA/GCMD | Track 1 DIF | Team Lead* | NASA's Global Change Master Directory
Anna Milan | NGDC | Track 1 CSDGM | | National Geophysical Data Center, NOAA
Ellyn Montgomery | USGS | Track 2 | | US Geological Survey Coastal Marine Geology Program
Robert Owens | NCDDC | Track 1 CSDGM | | NOAA Coastal Data Development Center
Roland Person | ESONET | Track 2 | | European Sea Observatory Network
Benoît Pirenne | NEPTUNE | Track 1 SML | Presenter | NEPTUNE Canada
Bob Randall | YSI | Track 2 | | Yellow Springs Instruments
Greg Reed | AODCJF | Track 1 ISO | Team Lead | Australian Ocean Data Centre Joint Facility
Alexandre Robin | UAH | Track 2 | Trainer | University of Alabama in Huntsville / SensorML
Barry Schlesinger | | Track 1 CSDGM | | NASA/RITSS
Ingo Simonis | Ifgi | Track 1 DIF | | Institute for Geoinformatics, University of Muenster
Cheryl Solomon | | Track 1 CSDGM | | AIBS
Brian Thompson | GoMOOS | Track 2 | Reporter | Gulf of Maine Ocean Observing System
John Wilson | INDS | Track 2 | | Intelligent Network Data Server

Photo credits:

Front cover, from top left to bottom right:
- CODAR, Regan Long, Coastal Ocean Currents Monitoring Program (COCMP) - Northern California
- CTD, Sean Whelan, WHOI
- ADCP, Fiamma Straneo, WHOI
- Mounted ADCP, Steven F. DiMarco, Texas A&M University
- Data Logger, Copyright 2000 MBARI
- CODAR Data, Coastal Ocean Currents Monitoring Program (COCMP) - Northern California
- Aanderaa RCM-9 Current Meter, Steven F. DiMarco, Texas A&M University
- Buoy, Sean Whelan, WHOI
- NAS 3E Nutrient Analyzer, Steven F. DiMarco, Texas A&M University
- University of Hawaii buoy, Sean Whelan, WHOI
- Longranger ADCP, Fiamma Straneo, WHOI
- CTD, Dennis McGillicuddy, WHOI

Back cover, from top left to bottom right:
- CTD, Chris Linder, WHOI
- SMI Workshop Leads, MMI, 2006
- RTOSS, Rob Reves-Sohn, WHOI
- A 3-m discus buoy, Leslie C. Bender III, Texas A&M University

Report design by Katherine Spencer Joyce, WHOI Graphic Services

Funded by NSF Grant OCE. Additional support provided by the Southeastern Universities Research Association (SURA) and the Gulf of Maine Ocean Observing System.


More information

Rolling Deck to Repository: Opportunities for US-EU Collaboration

Rolling Deck to Repository: Opportunities for US-EU Collaboration Rolling Deck to Repository: Opportunities for US-EU Collaboration Stephen Miller Scripps Institution of Oceanography La Jolla, California USA http://gdc.ucsd.edu Co-authors: Helen Glaves British Geological

More information

Understanding the Open Source Development Model. » The Linux Foundation. November 2011

Understanding the Open Source Development Model. » The Linux Foundation. November 2011 » The Linux Foundation Understanding the Open Source Development Model November 2011 By Ibrahim Haddad (PhD) and Brian Warner, The Linux Foundation A White Paper By The Linux Foundation This paper presents

More information

GEOSS Data Management Principles: Importance and Implementation

GEOSS Data Management Principles: Importance and Implementation GEOSS Data Management Principles: Importance and Implementation Alex de Sherbinin / Associate Director / CIESIN, Columbia University Gregory Giuliani / Lecturer / University of Geneva Joan Maso / Researcher

More information

Consolidation Team INSPIRE Annex I data specifications testing Call for Participation

Consolidation Team INSPIRE Annex I data specifications testing Call for Participation INSPIRE Infrastructure for Spatial Information in Europe Technical documents Consolidation Team INSPIRE Annex I data specifications testing Call for Participation Title INSPIRE Annex I data specifications

More information

ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development

ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development ACCI Recommendations on Long Term Cyberinfrastructure Issues: Building Future Development Jeremy Fischer Indiana University 9 September 2014 Citation: Fischer, J.L. 2014. ACCI Recommendations on Long Term

More information

UGANDA NATIONAL BUREAU OF STANDARDS LIST OF DRAFT UGANDA STANDARDS ON PUBLIC REVIEW

UGANDA NATIONAL BUREAU OF STANDARDS LIST OF DRAFT UGANDA STANDARDS ON PUBLIC REVIEW UGANDA NATIONAL BUREAU OF STANDARDS LIST OF DRAFT UGANDA STANDARDS ON PUBLIC REVIEW S/No. STANDARDS CODE TITLE(DESCRIPTION) SCOPE 1. DUS ISO/IEC 29151:2017 technology -- Security techniques -- Code of

More information

Conformance Requirements Guideline Version 0.1

Conformance Requirements Guideline Version 0.1 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 Editors: Conformance Requirements Guideline Version 0.1 Aug 22, 2001 Lynne Rosenthal (lynne.rosenthal@nist.gov)

More information

BPS Suite and the OCEG Capability Model. Mapping the OCEG Capability Model to the BPS Suite s product capability.

BPS Suite and the OCEG Capability Model. Mapping the OCEG Capability Model to the BPS Suite s product capability. BPS Suite and the OCEG Capability Model Mapping the OCEG Capability Model to the BPS Suite s product capability. BPS Contents Introduction... 2 GRC activities... 2 BPS and the Capability Model for GRC...

More information

Chapter X Security Performance Metrics

Chapter X Security Performance Metrics Chapter X Security Performance Metrics Page 1 of 10 Chapter X Security Performance Metrics Background For many years now, NERC and the electricity industry have taken actions to address cyber and physical

More information

CBS PROCEDURE ON REGIONAL REQUIREMENTS COORDINATION GROUPS. (Submitted by the Secretariat) Summary and Purpose of Document

CBS PROCEDURE ON REGIONAL REQUIREMENTS COORDINATION GROUPS. (Submitted by the Secretariat) Summary and Purpose of Document WORLD METEOROLOGICAL ORGANIZATION COMMISSION FOR BASIC SYSTEMS OPEN PROGRAMME AREA GROUP ON INTEGRATED OBSERVING SYSTEMS EXPERT TEAM ON SATELLITE UTILIZATION AND PRODUCTS SEVENTH SESSION GENEVA, SWITZERLAND,

More information

Contents. viii. List of figures. List of tables. OGC s foreword. 3 The ITIL Service Management Lifecycle core of practice 17

Contents. viii. List of figures. List of tables. OGC s foreword. 3 The ITIL Service Management Lifecycle core of practice 17 iii Contents List of figures List of tables OGC s foreword Chief Architect s foreword Preface vi viii ix x xi 2.7 ITIL conformance or compliance practice adaptation 13 2.8 Getting started Service Lifecycle

More information

ENISA s Position on the NIS Directive

ENISA s Position on the NIS Directive ENISA s Position on the NIS Directive 1 Introduction This note briefly summarises ENISA s position on the NIS Directive. It provides the background to the Directive, explains its significance, provides

More information

Enhancing Wrapper Usability through Ontology Sharing and Large Scale Cooperation

Enhancing Wrapper Usability through Ontology Sharing and Large Scale Cooperation Enhancing Wrapper Usability through Ontology Enhancing Sharing Wrapper and Large Usability Scale Cooperation through Ontology Sharing and Large Scale Cooperation Christian Schindler, Pranjal Arya, Andreas

More information

ASSURING DATA INTEROPERABILITY THROUGH THE USE OF FORMAL MODELS OF VISA PAYMENT MESSAGES (Category: Practice-Oriented Paper)

ASSURING DATA INTEROPERABILITY THROUGH THE USE OF FORMAL MODELS OF VISA PAYMENT MESSAGES (Category: Practice-Oriented Paper) ASSURING DATA INTEROPERABILITY THROUGH THE USE OF FORMAL MODELS OF VISA PAYMENT MESSAGES (Category: Practice-Oriented Paper) Joseph Bugajski Visa International JBugajsk@visa.com Philippe De Smedt Visa

More information

Jeffery S. Horsburgh. Utah Water Research Laboratory Utah State University

Jeffery S. Horsburgh. Utah Water Research Laboratory Utah State University Advancing a Services Oriented Architecture for Sharing Hydrologic Data Jeffery S. Horsburgh Utah Water Research Laboratory Utah State University D.G. Tarboton, D.R. Maidment, I. Zaslavsky, D.P. Ames, J.L.

More information

ISAO SO Product Outline

ISAO SO Product Outline Draft Document Request For Comment ISAO SO 2016 v0.2 ISAO Standards Organization Dr. Greg White, Executive Director Rick Lipsey, Deputy Director May 2, 2016 Copyright 2016, ISAO SO (Information Sharing

More information

Title Core TIs Optional TIs Core Labs Optional Labs. 1.1 WANs All None None None. All None None None. All None 2.2.1, 2.2.4, 2.2.

Title Core TIs Optional TIs Core Labs Optional Labs. 1.1 WANs All None None None. All None None None. All None 2.2.1, 2.2.4, 2.2. CCNA 2 Plan for Academy Student Success (PASS) CCNA 2 v3.1 Instructional Update # 2006-1 This Instructional Update has been issued to provide guidance on the flexibility that Academy instructors now have

More information

Metadata for Data Discovery: The NERC Data Catalogue Service. Steve Donegan

Metadata for Data Discovery: The NERC Data Catalogue Service. Steve Donegan Metadata for Data Discovery: The NERC Data Catalogue Service Steve Donegan Introduction NERC, Science and Data Centres NERC Discovery Metadata The Data Catalogue Service NERC Data Services Case study:

More information

Interoperability in Science Data: Stories from the Trenches

Interoperability in Science Data: Stories from the Trenches Interoperability in Science Data: Stories from the Trenches Karen Stocks University of California San Diego Open Data for Open Science Data Interoperability Microsoft escience Workshop 2012 Interoperability

More information

Joint Application Design & Function Point Analysis the Perfect Match By Sherry Ferrell & Roger Heller

Joint Application Design & Function Point Analysis the Perfect Match By Sherry Ferrell & Roger Heller Joint Application Design & Function Point Analysis the Perfect Match By Sherry Ferrell & Roger Heller Introduction The old adage It s not what you know but when you know it that counts is certainly true

More information

Certification Standing Committee (CSC) Charter. Appendix A Certification Standing Committee (CSC) Charter

Certification Standing Committee (CSC) Charter. Appendix A Certification Standing Committee (CSC) Charter Appendix A A1 Introduction A1.1 CSC Vision and Mission and Objectives Alignment with Boundaryless Information Flow: Our vision is a foundation of a scalable high integrity TOGAF certification programs

More information

For Attribution: Developing Data Attribution and Citation Practices and Standards

For Attribution: Developing Data Attribution and Citation Practices and Standards For Attribution: Developing Data Attribution and Citation Practices and Standards Board on Research Data and Information Policy and Global Affairs Division National Research Council in collaboration with

More information

Executive Order & Presidential Policy Directive 21. Ed Goff, Duke Energy Melanie Seader, EEI

Executive Order & Presidential Policy Directive 21. Ed Goff, Duke Energy Melanie Seader, EEI Executive Order 13636 & Presidential Policy Directive 21 Ed Goff, Duke Energy Melanie Seader, EEI Agenda Executive Order 13636 Presidential Policy Directive 21 Nation Infrastructure Protection Plan Cybersecurity

More information

ASPECT Adopting Standards and Specifications for. Final Project Presentation By David Massart, EUN April 2011

ASPECT Adopting Standards and Specifications for. Final Project Presentation By David Massart, EUN April 2011 ASPECT Adopting Standards Final Project Presentation By David Massart, EUN April 2011 The ASPECT Best Practice Network was supported by the European Commission s econtentplus Programme. Outline of the

More information

UNDAF ACTION PLAN GUIDANCE NOTE. January 2010

UNDAF ACTION PLAN GUIDANCE NOTE. January 2010 UNDAF ACTION PLAN GUIDANCE NOTE January 2010 UNDAF ACTION PLAN GUIDANCE NOTE January 2010 Table of Contents 1. Introduction...3 2. Purpose of the UNDAF Action Plan...4 3. Expected Benefits of the UNDAF

More information

APPENDIX B STATEMENT ON STANDARDS FOR CONTINUING PROFESSIONAL EDUCATION (CPE) PROGRAMS

APPENDIX B STATEMENT ON STANDARDS FOR CONTINUING PROFESSIONAL EDUCATION (CPE) PROGRAMS APPENDIX B STATEMENT ON STANDARDS FOR CONTINUING PROFESSIONAL EDUCATION (CPE) PROGRAMS Appendix B-1 STATEMENT ON STANDARDS FOR CONTINUING PROFESSIONAL EDUCATION (CPE) PROGRAMS The following standards are

More information

ISO 2146 INTERNATIONAL STANDARD. Information and documentation Registry services for libraries and related organizations

ISO 2146 INTERNATIONAL STANDARD. Information and documentation Registry services for libraries and related organizations INTERNATIONAL STANDARD ISO 2146 Third edition 2010-04-15 Information and documentation Registry services for libraries and related organizations Information et documentation Services de registre pour les

More information