Exploring Usability Requirements for Software Acquisition


Exploring Usability Requirements for Software Acquisition

Prepared By: CAE Integrated Enterprise Solutions Canada (CAE IES), CAE Inc., 1135 Innovation Drive, Ottawa, Ont., K2K 3G7, Canada

Contractor's Document Number: Version 02
PWGSC Contract Number: Contract # W , CORA Task #164
CSA: Tania Randall

Contract Report DRDC-RDDC-2016-C086
March 2014

This S&T document is provided for convenience of reference only. Her Majesty the Queen in right of Canada, as represented by the Minister of National Defence ("Canada"), makes no representations or warranties, expressed or implied, of any kind whatsoever, and assumes no liability for the accuracy, reliability, completeness, currency or usefulness of any information, product, process or material included in this document. Nothing in this document should be interpreted as an endorsement for the specific use of any tool, technique or process examined in it. Any reliance on, or use of, any information, product, process or material included in this document is at the sole risk of the person so using it or relying on it. Canada does not assume any liability in respect of any damages or losses arising out of or in connection with the use of, or reliance on, any information, product, process or material included in this document.

© Her Majesty the Queen in Right of Canada, as represented by the Minister of National Defence, 2016
© Sa Majesté la Reine (en droit du Canada), telle que représentée par le ministre de la Défense nationale, 2016

CAE Integrated Enterprise Solutions Canada (CAE IES), CAE Inc., 1135 Innovation Drive, Ottawa, Ont., K2K 3G7, Canada

CORA Task #164
EXPLORING USABILITY REQUIREMENTS FOR SOFTWARE ACQUISITION

For: Tania Randall, Defence Research and Development Canada, Atlantic Research Centre

31 March 2014
Document No. Version 02

APPROVAL SHEET

Document No. Version 02
Document Name: CORA Task #164 Exploring Usability Requirements for Software Acquisition
31 March 2014

REVISION HISTORY

Version 01: Initial document issued (27 March 2014)
Version 02: Client's comments addressed (31 March 2014)

TABLE OF CONTENTS

1 INTRODUCTION
   Background
   Objective
   This Document
2 UNDERSTANDING PROCUREMENT
   General
   Defence Acquisition Process
   Commercial vs. Custom Products
   Solicitation
   Bid Evaluation
3 UNDERSTANDING USABILITY
   Usability in Context
      Systems Engineering
      Software Engineering
      Human Systems Integration & Human Factors
   Usability Standards
   Usability Measures
4 PROCURING USABLE SYSTEMS
   Context-of-Use
      Context Scenarios
      Context Scenarios as a Basis for Requirements
   Writing Usability Requirements
      Incremental Approaches
      Requirements Based on Context-of-Use & Standards
      Quantitative Usability Requirements
   Evaluating Usability Components of RFPs
      Usability Components Overview
      Organizational Capability
      Process Quality
      Product Quality
      Quality in Use
   Usability Procurement Framework
      Procuring Usable Custom & Commercial Products
      RFP Checklist
5 DISCUSSION & CONCLUSION
REFERENCES
APPENDIX A STANDARDS

A.1 Design Specification Standards
   A.1.1 ISO 9241: Ergonomic Requirements for Office Work with Visual Display Terminals
   A.1.2 Military Standard-1472G: 2012 Human Engineering
   A.1.3 ANSI/HFES 200 Human Factors Engineering of Software User Interfaces
   A.1.4 Nielsen's Heuristics
A.2 Design Process Standards
   A.2.1 Military Standard 46855A
   A.2.2 ISO 9241-210:2010 Ergonomics of Human-System Interaction -- Part 210: Human-Centred Design for Interactive Systems
A.3 Proprietary Standards
A.4 Reference Guides
   A.4.1 Human Factors Design Standard (HFDS) for Acquisition of Commercial-off-the-Shelf (COTS) Subsystems, Non-Developmental Items (NDI) and Developmental Systems
   A.4.2 Nuclear Regulatory Commission (NUREG) 0700 Human-System Interface Design Review Guidelines
   A.4.3 Common Industry Specification for Usability Requirements (CISU-R)
   A.4.4 ISO/IEC 25062:2006: Reporting Results
APPENDIX B PROPOSED EXPERIMENT
   B.1 Objective
   B.2 Participants
   B.3 Variables
   B.4 Apparatus
   B.5 Design
      B.5.1 Approach
      B.5.2 Analysis

LIST OF FIGURES

Figure 1-1: Conceptual Overview of Project Objectives, Related Tasks and Deliverables
Figure 2-1: General Procurement Process (adapted from Markensten, 2003)
Figure 2-2: DND Acquisition Process Model (Greenley et al., 2006)
Figure 2-3: Major Steps in the Procurement Process (adapted from Markensten, 2003)
Figure 3-1: Situating Usability within Human Systems Integration
Figure 3-2: Typical Software Development Life Cycle Stages
Figure 4-1: Usability Framework (ISO 9241-11; Bevan, 1997)
Figure 4-2: Activities and Results in Specifying Usability Requirements (adapted from Geis et al., 2003)
Figure 4-3: Information Included in Requirements at each Level of CISU-R Compliance
Figure 4-4: Framework showing the Application of ISO 9241 in the Specification of Context-Related Requirements (Geis et al., 2003)
Figure 4-5: Example Analysis Case showing the Application of ISO 9241 in the Specification of Software Design (Geis et al., 2003)
Figure 4-6: Applying ISO 9241 in Association with the Dialogue-Techniques Design Solution (Geis et al., 2003)
Figure 4-8: Four Types of Usability Measures (adapted from NRC, 2007)
Figure 4-9: Usability Procurement Framework
Figure 4-10: Usability Involvement of Custom System
Figure 4-11: Usability Involvement of COTS System
Figure 4-12: Usability Involvement of MOTS/GOTS System
Figure A-1: Software Usability Standards (ISO 9241) Structured from Generic to Specific (Geis et al., 2003)
Figure A-2: The Human-Centred Design Process (ISO 9241-210:2010)
Figure A-3: Common Industry Format for Reporting Formative Usability

LIST OF TABLES

Table 2-1: Spectrum of COTS Commercial Product Definitions (adapted from DSB, 2009)
Table 3-1: Usability Standards
Table 4-1: Summary of Key RFP Requirements and Application to Products

EXECUTIVE SUMMARY

This document, entitled Exploring Usability Requirements for Software Acquisition, presents a systematic review and synthesis of relevant literature in support of a proposed methodology for deriving usability requirements prior to the software acquisition process. This work was conducted by CAE Integrated Enterprise Solutions (CAE IES) under CORA Task #164 for Defence Research and Development Canada (DRDC) Atlantic Research Centre, for Scientific Authority (SA) Tania Randall, contract # W .

Requests for Proposals (RFPs) for software procurement often include detailed functional requirements, but lack meaningful usability requirements to help ensure that the software selected not only enables the user to do the job at hand, but to do it as easily and intuitively as possible. While user testing is one of the most reliable methods for assessing software usability, hands-on testing is often not feasible prior to selection in the acquisition process (due to software accessibility, time, resources, etc.). As a result, RFPs often focus on the more measurable, functional requirements for the software, and the usability (or lack thereof) of the software solution is not recognized until post-purchase.

The current project expanded upon initial ideas provided by the SA by reviewing and synthesizing existing literature on usability, human factors design standards/guidelines, usability heuristics, and the procurement process to propose a framework for deriving usability-based requirements for RFPs for Commercial-Off-the-Shelf (COTS) and Military-Off-the-Shelf (MOTS) systems. The review also considered information from bid proposal instructions used to rank the relative usability of software alternatives offered in response to a solicited RFP, without requiring hands-on usability testing with the software during evaluation.

Results of the literature review suggest that successful integration of usability requirements into an RFP requires expert knowledge of the interdependency between usability definition and evaluation. Essentially, usability testing can be used to verify current, or establish future, usability objectives. The review outcomes are framed as the main sections of the report: 1) procurement activities; 2) usability measurement and evaluation; and 3) methods for future research to validate the effectiveness of the proposed framework. Together, the outcomes form a proposed framework highlighting the integration of usability requirements and evaluation considerations into system design at each phase of the system procurement process. The framework embodies a combination of user-centred design, standards and software usability requirements designed to elicit bidder responses that allow RFP evaluators to rank the relative usability of two or more systems (e.g., software or web applications). The framework includes methods for bid evaluation against the specified requirements.

While the framework is based on the available literature, the actual response from bidders is outside of the procurer's control. As such, it is recommended to validate the framework in practice by implementing usability requirements in future DRDC Atlantic software RFPs. This process is ideally coupled with usability testing of the procured product in controlled settings through execution of representative tasks. Finally, for situations where access to the software is not possible, CAE has recommended an experimental plan to validate the use of tools for assessing software usability compared to traditional user testing.

1 INTRODUCTION

This document, entitled Exploring Usability Requirements for Software Acquisition, presents a systematic review and synthesis of relevant literature in support of a proposed methodology for deriving usability requirements prior to the software acquisition process. This work was conducted by CAE Integrated Enterprise Solutions (CAE IES) under CORA Task #164 for Defence Research and Development Canada (DRDC) Halifax, for Scientific Authority (SA) Tania Randall, contract # W .

1.1 Background

Requests for Proposals (RFPs) for software procurement often include detailed functional requirements, but lack meaningful usability requirements to help ensure that the software selected not only enables the user to do the job at hand, but to do it as easily and intuitively as possible. Usability is defined as "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context-of-use" (International Organization for Standardization (ISO) 9241-11, 1998). By this definition, there is no such thing as "the" usability of a product (Geis, Dzida & Redtenbacher, 2003); a product's usability depends on the context in which the product is being used.

Since the 1990s, usability researchers and practitioners have increasingly urged procurers to establish requirements for the user interface development process in the RFP (Artman & Zällh, 2005; Hix, Hartson, Siochi & Ruppert, 1994; Jokela, Laine & Nieminen, 2013). In software system procurement, requirements for usability are either neglected, not properly considered, or ill-defined (Ardito, Buono, Caivano, Costabile & Lanzilotti, 2013). The desire for user-friendly systems is often so implicit that customers do not regard it as a requirement in proposal requests, or they simply hand over the task of requirement articulation and elicitation to the developers (Hix, Hartson, Siochi & Ruppert, 1994). As a result, systems acquired through public tendering are often unusable (Jokela, 2010).

The issue of procuring usable systems is twofold. On one hand, if the contract does not contain explicit requirements for usability, usability is generally not budgeted for by the contractor and will be among the first considerations cut if time or finances are constrained (Booher, 2003). On the other hand, procurers may ask for usability but be unable to adequately define it, which impedes contractors' ability to respond effectively to the RFP (Markensten, 2003). For instance, a three-month review of public authorities' calls for tender identified six categories of usability requirements; however, none were deemed to contain requirements specified in a valid and verifiable manner (Lehtonen, Kumpulainen, Liukkonen & Jokela, 2010). Similarly, in a comprehensive review of 180 HCI usability studies, authors frequently failed to actually measure usability, did not cover usability broadly, and did not use valid and reliable measures of usability for the user interface being studied (Hornbæk, 2006). Despite more than 20 years of usability research, usability measurement is not standard, and usability studies can be weakened by their choices of usability measures.

1.2 Objective

The main objective of this document is to propose a methodology for deriving usability requirements prior to the software acquisition process. A second objective is to provide bid proposal instructions that can be used to rank the relative usability of software alternatives offered in response to a solicited RFP, without requiring hands-on usability testing with the software during evaluation. To this end, the project assumed the following criteria:

- A software product procurement is needed, and an RFP must be developed in order to solicit multiple bids offering various software options;
- The client has already specified all of the software functional requirements; however, the non-functional usability requirements have yet to be determined; and
- Direct user access to the software options is not feasible as part of the evaluation process.

1.3 This Document

This document provides an overview of system procurement and usability within the context of systems engineering. Highlighted throughout the report are key parallels between the procurement and system development processes. Human Factors design specification and process standards are referenced throughout the document, and described in more detail in Appendix A.

After an initial overview of procurement and acquisition, the subsequent sections present usability in the context of systems engineering and life cycle development. The relationship among the requirements definition phases in product development and product procurement is presented as a roadmap for including usability processes and standards within RFPs. The aim is to provide the knowledge needed to write usability requirements, purchase usable COTS products, and improve usability at various levels of software development.

To achieve the project objectives, CAE IES conducted a number of interdependent knowledge elicitation tasks to understand and document the problem space, usability requirements, and procurement activities. The corresponding project tasks are shown in Figure 1-1 as parallel-but-related literature review streams defined by the objectives. While mostly distinct, knowledge gained during each literature review helped to guide subsequent tasks. As shown in Figure 1-1, this document is organized according to the main project objectives:

1. Define the procurement and system acquisition activities relevant to Canadian government and Defence organizations;
2. Define usability concepts, measures and standards in the context of software system procurement; and
3. Define usability characteristics and their application to software procurement.


The acquisition system in Canada involves the Department of National Defence (DND), PWGSC and Industry Canada. In contrast to the US Department of Defense (DoD) approach to system acquisition management (DoD Instruction), DND will not publish its first annual Defence Acquisition Guide until June. As such, the process described here is summarized from a previous CAE (Professional Services) contract report to DRDC describing HSI in Defence acquisition (Greenley, 2006).

Acquisition within DND/CF is guided by two core processes: the Defence Management System (DMS) and the Materiel Acquisition and Support process. The DMS process involves a series of major phases that can be grouped into pre-acquisition activities, acquisition, in-service and disposal (Figure 2-2).

Figure 2-2: DND Acquisition Process Model (Greenley et al., 2006)

The following activities are involved in DND pre-acquisition:

Capability Planning is the highest level of planning that occurs within the Canadian Forces and is a core component of the annual business cycle. Capability Planning results in the identification of deficiencies and/or opportunities that the Canadian Armed Forces (CAF) must address to be able to accomplish its missions and the necessary planning scenarios. The capability planning cycle can lead to the identification of capability and/or system development projects, which can include major or minor systems acquisitions.

The Concept Development and Experimentation (CD&E) process is conducted by military CD&E centres, which are led by military personnel within the CF community. These experimentation centres conduct studies to evaluate new concepts, which include a combination of technology, personnel, organizational structures, and associated doctrine, tactics, techniques and procedures. A concept will be explored to determine improved ways for the CAF to achieve its mission, and will result in requirements for acquisition projects, as well as in changes to doctrine, organizational structures, and types of personnel. The results arising from CD&E outputs influence many facets of the CAF.

The Research and Development (R&D) process is conducted by the laboratories of Defence R&D Canada. DRDC scientific programs research and develop new technologies and new knowledge. A new technology that is researched and developed may often be further explored in terms of its operational impact through CD&E centres, while a concept for a new technology that is suggested by CD&E activities may often be researched and developed by DRDC to further determine its technological feasibility. As a result, there is a natural iteration and interaction amongst the CD&E and R&D communities, and both serve to answer questions and generate inputs to the Defence acquisition process.

The following activities are involved in the DND acquisition process, and are illustrated in Figure 2-2:

Identification: involves formally identifying the need for a new system, and obtaining approval to register a new project to acquire that system;

Options analysis: involves an analytical comparison of major options to address the deficiency that the acquisition project is targeted to address, resulting in a selected option being approved;

Definition: generates a structured set of requirements (increasingly, these are performance-based requirements) for the acquisition of the selected option. At the end of the Definition phase, a contracting process is established, where multiple vendors bid a solution against the requirements and a winning solution is selected; and

Implementation: involves the industrial team working with the DND acquisition team to produce the system and transition it into operation with military units.

In general, before the content of usability requirements is discussed, it is important to understand where they would logically fit into the overall acquisition process. From the presented model (Figure 2-2), it seems evident that usability requirements belong to the Definition stage. However, less clear (from the current model) is how to specifically define those requirements, and how usability fits into the pre-acquisition phases. Procurement in DND involves additional steps and gateways that must be considered when moving from general government procurement to a military acquisition. For example, to ensure that usability considerations will be included in the acquisition phase, their initiation may need to originate at the R&D or even the CD&E stages. This problem space is outside the scope of this document, but provides a starting point for future considerations of usability within the larger military acquisition process.

Capability needs may be met by future products that are currently conceptual, or by commercially available products. Either way, the usability requirements must still be documented in the RFP. The overall requirements defined (including usability) can affect whether a capability need will be met by a commercial or developmental system, as discussed next.

Commercial vs. Custom Products

Traditionally, defence acquisition has relied on R&D capability or industry contractors to develop militarized products suitable for Defence needs. However, DND is increasingly seeking COTS, GOTS and MOTS products to achieve efficient deployment, reduce costs, and lower risks (Dean & Vigder, 1997; Landolt & Evans, 2001). The three types of commercial products are described as follows:

A COTS product is one that is used "as-is." COTS products are designed to be easily installed and to interoperate with existing system components. Almost all software bought by the average computer user fits into the COTS category: operating systems, office product suites and word processing programs are among the myriad examples. One of the major advantages of mass-produced COTS software is its relatively low cost. However, there may be limited or no access to technical documentation or source code, and there is no way to control the release of updates to the COTS software.

A MOTS (modifiable off-the-shelf, in this context) product is typically a COTS product whose source code can be modified. The product may be customized by the purchaser, by the vendor, or by another party to meet the requirements of the customer. In the military context, MOTS refers to an off-the-shelf product that is developed or customized by a commercial vendor to respond to specific military requirements (sometimes referred to as Militarized-COTS, or Mil-COTS). In theory, a MOTS product is adapted for a specific purpose and can be purchased and used immediately.

A GOTS (government off-the-shelf) product is typically developed by the technical staff of the government agency for which it is created. It is sometimes developed by an external entity, but with funding and specification from the agency. Because agencies can directly control all aspects of GOTS products, these are generally preferred for government purposes.

On the other hand, a custom product is developed explicitly to meet a government or military capability need. Custom builds may be required when a capability need (e.g., a command and control system) is not commercially available. Alternatively, the product may exist commercially, but for reasons such as modifiability, security or cost, a government or defence organization requires the system to be built independently (either internally or under contract).

Compared to custom products, commercial-based products are intended to streamline the acquisition process. Extended projects with detailed technical requirements and long design review cycles can be replaced with faster acquisition cycles, in which existing products are evaluated against a mix of technical and performance-based specifications to find best-fit solutions (Greenley, 2006). The decision to buy commercial products, however, is increasingly complex (Defense Science Board (DSB), 2009). Modern systems are seldom composed solely of military-specified parts, and little of what Defence purchases is composed only of COTS components (NRC, 2002).

One potential point of confusion is the varying range of understanding that has been linked with commercial systems. The DSB Task Force (2009) for Integrating Commercial Systems into the DoD reported eight levels of commercial systems used by the DoD, from Level 1 COTS, a true off-the-shelf product, to Level 8 COTS, a product that does not yet exist and would require commercial development (see Table 2-1). A program manager may intend to purchase a commercial system with the understanding that the unmodified commercial system is good enough for military use. The design of these products is theoretically complete; the product exists and can simply be acquired and integrated into the Canadian Forces operational context by wrapping it with the appropriate doctrine, procedures, staffing and training to support effective operations (Greenley, 2006).
The program manager may intend to purchase a certain level of COTS, such as Level 2 for minor modifications, but may end up having to purchase a Level 6 due to unforeseen requirements (DSB, 2009). Note that at this point the product is most likely understood as a MOTS or Mil-COTS product.

Table 2-1: Spectrum of COTS Commercial Product Definitions (adapted from DSB, 2009)

Level 1: Purchase a component or system from a manufacturer and use as-is.
Level 2: Purchase a component or system from a manufacturer and make minor modifications that do not affect functionality (e.g., change paint colour).
Level 3: Purchase a component or system from a manufacturer and make significant modifications that affect functionality (e.g., adding armoured doors, guns, military radios, etc.).
Level 4: Purchase a component or system from a manufacturer, but specify significant modifications in the purchase agreement that are made prior to purchase.
Level 5: Purchase a component or system based on an existing product. System requirements drive the replacement of many subsystems with other military-specified components.
Level 6: Direct a manufacturer or system integrator to modify a prototype product to meet requirements.
Level 7: Direct a manufacturer or system integrator to assemble a collection of commercial and military components, independently qualified on different systems, into a new system.
Level 8: Specify and purchase a product that does not yet exist, but requires commercial development and utilizes commercial plants or processes.

The current procurement process is not structured to explore whether minor changes, or even a smaller number of major changes, would provide military value (DSB, 2009). For instance, many programs do not adequately integrate systems engineering analysis (including HSI) early enough to influence such decisions and trade-offs, making it impossible to estimate the cost of modifications to commercial systems (Greenley, 2006). Similarly, deficient HSI consideration makes it impossible to foresee or fix system usability issues prior to acquisition, which can be equally costly.

While it may seem that COTS acquisition implies a scaled-down version of usability requirements, this does not change the activities required of the government acquisition teams. Essentially, the usability activities need to be followed by both government and industry participants in a COTS acquisition. The main difference between a COTS procurement and a custom procurement is that the industry team in a COTS procurement does not develop the design during the implementation phase, as the product already exists. As a result, HSI does not drive the design during implementation.

With COTS systems, the government primarily affects selection of the product and the deployment concept (Greenley, 2006). The government acquisition team must determine which


Needs Analysis

Prior to the development of an RFP, a stakeholder meeting should be arranged to ensure that all necessary requirements are identified and can be included in the RFP or assessed prior to purchase. Before the meeting is held, it is important to identify which stakeholders should attend and what issues will be discussed. Stakeholders could include project managers, developers, users, trainers, business managers and support personnel. Issues could include defining the overall vision of the system; understanding the key functionality of the system and whether there are any initial design concepts; and identifying the intended users and why they will be using the system (Hackos & Redish, 1998). The advantages of holding a stakeholder meeting are that it brings together all parties so that a common vision for the usability of the software can be identified, and that it is cost-effective. A disadvantage is that some stakeholders may not be included because they were not identified as potential stakeholders.

Request for Information (RFI)

Before a formal contract solicitation is issued, pre-solicitation methods can be used to elicit detailed information and feedback from suppliers. A common approach to pre-solicitation is a Request for Information (RFI). RFIs are a potential way to obtain useful feedback on the feasibility of proposed usability requirements. Such requests might outline a potential usability requirement and ask suppliers to describe their ability to satisfy the requirement and to provide ideas and suggestions on how the eventual solicitation might be structured. Responses are used to assist the client department and PWGSC in finalizing their plans for the requirement and in developing achievable objectives and deliverables. RFIs would normally be posted on a government electronic tendering system, such as the Government of Canada's Buy and Sell system, to obtain replies from a wide audience. Note that RFIs must clearly indicate that they are not solicitations and that there are no commitments with respect to future purchases or contracts.

RFIs identify the client department's potential requirement and its business objectives. The main objectives of an RFI are to give suppliers ample time to provide feedback, and to provide information to the procuring department. Specifically, an RFI allows suppliers to:

- Assess and comment on the adequacy and clarity of the requirements as currently expressed;
- Offer suggestions regarding potential alternative solutions that would meet the requirements, such as a solution with a lower environmental impact;
- Comment on the procurement strategy, preliminary basis of payment elements, and timelines for the project; and
- Comment on the draft solicitation, when included with the RFI.

Feedback from suppliers is used by the client department to:

- Determine whether to proceed with the requirements/strategy as planned and, if so, to further develop the internal planning, approval and solicitation documents that may potentially lead to a solicitation;
- Refine the procurement strategy, project structure, cost estimate, timelines, requirements definition, and other aspects of the requirement;
- Become a more "informed buyer" with an enhanced understanding of industry goods and service offerings in the areas of interest; and
- Assess potential alternative solution concepts that would meet its requirement, such as environmentally preferable solutions.

Request for Proposal

An RFP is a form of bid solicitation that is used when bidder selection is based on best value rather than on price alone. An RFP should be used when it is desirable to invite suppliers to propose a solution to a problem, requirement or objective, rather than simply conform to a requirement. The selection of the contractor is based on the effectiveness of the proposed solution (DoD, 2009). RFPs are project-based, with solutions, qualifications and price as the main criteria that define the winning proponent. The RFP solicitation method is used mainly to acquire services when government wants to review and implement different and new solutions to a problem, project, or business process.

An RFP can range from a single-step process for straightforward procurement opportunities to a multi-stage process for complex and significant opportunities. A multi-stage process may involve the use of an RFI to obtain background information, as well as the use of a Request for Qualification (RFQ) to pre-qualify vendors for a subsequent RFP (DoD, 2009). Suppliers can also express their interest in bidding, while describing their capabilities, through a Letter of Interest (LOI) prior to the official release of an RFP.

All RFP bids must be evaluated, and the successful supplier selected, in accordance with the specific criteria and procedures set out in the bid solicitation. In other words, if usability criteria were not specified in the requirements, they cannot be applied towards contract award. The preparation of bids is often costly to suppliers, particularly if they need to conduct research on their own end users to even be able to write requirements. To keep the total cost down while ensuring freedom of access to suppliers, PWGSC suggests a two-step approach: 1) suppliers provide LOIs and responses to RFQs or RFIs, from which a short list is developed; and then 2) shortlisted suppliers are informed and can submit detailed bids. Note that suppliers not included on the short list are still able to request the bid solicitation and submit bids; the shortlist is intended to expedite the overall process.

The bid solicitation should include, as a minimum, the following information:

1. A general description of the requirement(s);
2. Bidder instructions: the instructions, clauses and conditions applicable to the bid solicitation, stating that the bidder agrees to be bound by the clauses and conditions contained in all parts of the bid solicitation;
3. Bid preparation instructions (i.e., how the bid should be assembled, such as the order or appearance of documents);
4. Bid evaluation procedures and basis of selection: indicates how the evaluation will be conducted, the evaluation criteria that must be addressed in the bid, and the basis of selection (e.g., the weighting criteria for each element, and when and how elements will be assessed);
5. Certification requirements (e.g., expertise, professional affiliations, relevant academic degrees);
6. Security and financial requirements: the specific requirements that must be addressed by bidders;
7. Resulting contract clauses: the clauses and conditions that will apply to any resulting contract; and
8. Instructions informing bidders that they may request information about the results of the RFP and how their bid was evaluated.

Draft Request for Proposal (RFP)

In addition to the RFI, issuing a draft RFP may be an effective way to elicit industry feedback on usability requirements (DoDI). The draft RFP can be included with the RFI to request feedback on both the requirement and the proposed business strategy. Drafts provide any interested parties with an opportunity to provide comments before the actual acquisition process starts. The government can benefit from this process by considering the industry feedback and how it could improve the acquisition. It also gives potential contractors an opportunity to get an early start on planning and proposal development, since contractors are often given the minimum 30 days to prepare a proposal once the formal RFP is issued. The primary disadvantage is the time required to issue the draft and to evaluate and interpret industry comments. Sufficient planning time, and potential support from either internal or external usability experts, are recommended for procurer organizations.

Bid Evaluation

The main purpose of bid evaluation is to determine the best responsive bid, in accordance with the evaluation and selection methodology specified in the solicitation document, from among the bids submitted before the bid closing time on the date specified in the bid solicitation. The responsive bid offering the best value may not necessarily be the one with the lowest price. In order to accurately determine best value, a logical, systematic evaluation procedure covering all aspects of the evaluation process must be followed.

The procuring client is responsible for the evaluation of the technical portion of the bids and, where applicable, the management portion. Technical criteria can be mandatory or rated. Mandatory criteria are assessed on a simple pass/fail basis; bids that fail to meet any of the mandatory criteria are considered non-compliant. Depending on the product, procurers may find it too restricting to include usability as part of the mandatory requirements. For example, "the vendor shall supply a usable product" is not recommended as a mandatory requirement, as it would not be possible to assess on a simple pass/fail basis.

Only bids that meet the mandatory criteria are subject to point rating, as applicable. Rated criteria are used to assess various elements of the technical bid so that the relative merits of each bid can be determined; the desired weight of the usability requirements would apply here. The maximum points that can be achieved for each rated criterion must be specified in the bid solicitation. When point rating is used, bids may have to achieve a minimum number of points overall to be considered responsive, and often they must also achieve a minimum number of points for certain individual criteria. Bid solicitations must clearly identify any mandatory minimum thresholds.

Bids must be evaluated in accordance with the evaluation criteria established in the bid solicitation. Even though the onus is on bidders to submit clear and well-organized bids, bids must be reviewed with diligence and thoroughness to ensure that no essential information is missed. The evaluators must not use criteria or factors not included in the bid solicitation, nor derive conclusions from information not contained in the bids, since such deductions may prove wrong. Whenever possible, the same evaluators should evaluate all bids. When evaluating bids, evaluators must consider all information provided in the bid, and must not base their evaluation on undisclosed criteria.

In effect, to procure a usable system, the procuring organization has to provide usability requirements. Furthermore, the RFP must show how the usability requirements explicitly contribute to contract award, meaning the procurer also needs to understand how to evaluate bid responses (Booher, 2003; Jokela, Laine & Nieminen, 2013). The next section addresses both usability concepts and methods for the definition and evaluation of usability requirements.
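Before moving on, the Python sketch below makes the point-rating mechanics described above concrete: mandatory screening followed by point rating with an overall minimum and a per-criterion minimum. All criteria names, weights, scores, and thresholds are invented for illustration and are not drawn from any actual PWGSC solicitation.

```python
"""Sketch of a two-stage bid evaluation: mandatory screening, then point
rating. All names, weights, and thresholds below are hypothetical."""

# Mandatory criteria are pass/fail gates: a single failure makes a bid
# non-compliant and excludes it from point rating.
MANDATORY = ("meets_functional_spec", "meets_security_reqs")

# Rated criteria carry maximum point values stated in the solicitation;
# usability appears here as a weighted, rated criterion.
MAX_POINTS = {"technical": 40, "usability": 30, "management": 20, "price": 10}
OVERALL_MINIMUM = 60                    # minimum total points to be responsive
CRITERION_MINIMUMS = {"usability": 15}  # per-criterion thresholds, if stated


def evaluate_bid(bid):
    """Return (responsive, total_points) for a single bid."""
    # Stage 1: mandatory screening on a simple pass/fail basis.
    if not all(bid["mandatory"].get(name, False) for name in MANDATORY):
        return False, 0.0

    # Stage 2: point rating of the compliant bid against the rated criteria.
    total = 0.0
    for criterion, max_pts in MAX_POINTS.items():
        points = min(bid["rated"].get(criterion, 0), max_pts)
        if points < CRITERION_MINIMUMS.get(criterion, 0):
            return False, total  # fails a stated per-criterion minimum
        total += points
    return total >= OVERALL_MINIMUM, total


bids = {
    "Bid A": {"mandatory": {"meets_functional_spec": True, "meets_security_reqs": True},
              "rated": {"technical": 33, "usability": 24, "management": 16, "price": 7}},
    "Bid B": {"mandatory": {"meets_functional_spec": True, "meets_security_reqs": True},
              "rated": {"technical": 38, "usability": 12, "management": 18, "price": 9}},
}

for name, bid in bids.items():
    responsive, points = evaluate_bid(bid)
    print(f"{name}: {points:.0f} points, responsive = {responsive}")
```

In this invented example, Bid B leads on technical merit but is screened out by the usability minimum, illustrating why the weights and thresholds for usability must be stated in the solicitation itself if they are to influence contract award.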

3 UNDERSTANDING USABILITY

This section presents usability within the context of systems engineering in order to show how the concept of usability applies to requirements in procurement. The fundamental identification and evaluation of usability requirements are explained through an overview of relevant usability design standards and of formative and summative testing methods. All three sections contribute to a proposed framework to facilitate the development of usability requirements.

3.1 Usability in Context

Usability is broadly defined as optimizing all the factors that determine effective interaction between users and systems in a working environment. The concept of usability applies to both the hardware and the software of a product. Usability is determined not only by the characteristics of the interactive product or system, but also by the whole context-of-use, including the nature of the users, tasks, and operational environment (Nielsen, 1998; Geis et al., 2003; Hornbæk, 2006; Markensten, 2005).

Usability can be specified and evaluated in terms of user performance and satisfaction (ISO 9241-11). User performance is measured by the extent to which the intended goals of use are achieved (effectiveness), and by the resources, such as time, money or mental effort, that have to be expended to achieve those goals (efficiency). Satisfaction is measured by the extent to which the user finds the use of the product acceptable (Bevan, 2005). The objective of designing and evaluating products and technology for usability is to enable users to achieve goals and meet needs in a particular context-of-use.

In software design, usability applies not to the user but to the extent to which the functional properties and other quality characteristics of the software meet user needs in a specific context-of-use. Developing usable software necessitates knowledge of what usability criteria to apply, and also of how to do so. The "what" portion concerns the detailed specifications needed to ensure that usability requirements are satisfied. The "how" aspect concerns the usability-centred processes describing principles and recommendations for achieving the desired result.

The process of designing for usability is part of the human-centred design considerations of the systems engineering process (see Figure 3-1). Usability (sometimes referred to as usability engineering) is a human-centric focus within the domain of Human Factors (HF). In the defence community, HF (also known as HF Engineering) is generally considered a sub-domain of Human Systems Integration (HSI) within the overall systems engineering process. Software engineering is a sub-set of systems engineering that can use HF methods, including usability, to design usable software. Usability specifications can be used within software engineering to define requirements for the acquisition of usable software.
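As a concrete illustration of how the three ISO 9241-11 measures can be quantified, the Python sketch below computes effectiveness, efficiency and satisfaction from a small set of invented summative test records. The operationalizations used here (task completion rate, completed tasks per unit of user time, and a mean 0-100 questionnaire score) are common choices rather than the only valid ones, and every data value is hypothetical.

```python
"""Sketch of the three ISO 9241-11 usability measures computed from
summative test data; all session records are invented for illustration."""

from statistics import mean

# One record per participant-task: whether the task goal was achieved,
# the time expended (seconds), and a 0-100 satisfaction questionnaire score.
sessions = [
    {"completed": True,  "time_s": 140, "satisfaction": 78},
    {"completed": True,  "time_s": 205, "satisfaction": 65},
    {"completed": False, "time_s": 300, "satisfaction": 40},
    {"completed": True,  "time_s": 95,  "satisfaction": 88},
]

# Effectiveness: the extent to which the intended goals are achieved,
# here the simple task completion rate.
effectiveness = mean(1.0 if s["completed"] else 0.0 for s in sessions)

# Efficiency: resources expended to achieve the goals, here expressed
# as completed tasks per minute of user time.
efficiency = sum(1 for s in sessions if s["completed"]) / (
    sum(s["time_s"] for s in sessions) / 60.0
)

# Satisfaction: acceptability of the product to the user, here the
# mean questionnaire score across participants.
satisfaction = mean(s["satisfaction"] for s in sessions)

print(f"effectiveness: {effectiveness:.0%} of task goals achieved")
print(f"efficiency:    {efficiency:.2f} completed tasks per user-minute")
print(f"satisfaction:  mean score {satisfaction:.1f}/100")
```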


5. Testing the product: product testing generally takes place at all stages of software development. However, this stage refers to the testing-only stage, in which product defects are reported, tracked, fixed and retested until the product reaches the quality standards defined in the Software Requirement Specification. When usability requirements have been defined, product testing also needs to include usability testing.

6. Deployment and maintenance: once the product is tested and ready to be deployed, it is released formally in the appropriate market. After the product is released, maintenance is performed for the existing customer base.

The key point of this section is that the requirements-gathering stages of software engineering tend to overlap with the requirements definition stages of procurement. Regardless of the system development model chosen, usability should be considered in the initial preparation of a software project, just as it should be considered early in procurement. However, it is often the case that usability specialists come into play relatively late, typically when user testing needs to be scheduled (Geis et al., 2003), or when it is certain there will be sufficient funds remaining in the budget. Ideally, however, usability requirements should be specified before the launch of a software project or an acquisition, as these provide a valid basis for usability design and testing. To encourage the early consideration of usability, the next section discusses how the HSI framework can support the requirements engineer's activities by providing guidelines on how to analyze the context-of-use of the product to be developed or purchased. Usability objectives are then derived from this analysis in order to describe a user-oriented concept of system use, which is complementary to the desired system concept.

Human Systems Integration & Human Factors

Human Systems Integration (HSI) captures the basic overarching concept of integrating the human into the engineering of the system. The International Council on Systems Engineering (INCOSE, 2014) defines HSI as "the interdisciplinary technical and management processes for integrating human considerations within and across all system elements [and is] an essential enabler to systems engineering". HSI is the systems engineering process and program management effort that provides integrated and comprehensive analysis, design, and assessment of requirements, concepts, and resources for human engineering, manpower, personnel, training, system safety, health hazards, personnel survivability, and habitability. These domains are intimately and intricately interrelated and interdependent, and must be among the primary drivers of effective, efficient, affordable, and safe system designs. HSI integrates and facilitates trade-offs among these domains, but does not replace individual domain activities, responsibilities, or reporting channels (MIL-STD-46855A).

The Canadian version of HSI consists of five HSI domains (Figure 3-1): Human Factors; Manpower and Personnel; Training; System Safety; and Health Hazards Assessment, applied to a materiel system to ensure safe, effective operability and supportability (Beevis, 1992). The five Canadian HSI domains are described as follows:

Human Factors (HF): applies what is known about human cognitive and physical capabilities and limitations to system definition, design, development, and evaluation, to optimize human-machine performance under operational conditions. The primary sub-areas of HF include operator roles, functions and tasks; the user-system interface; and the workspace and environment. As shown in Figure 3-1, usability (as relevant to this project) is part of HSI within HF. In this context, usability applies to the interaction of the user and the software system interface.

Personnel: involves the number of military and civilian personnel required, and potentially available, to operate, maintain, sustain and provide training for systems, as well as the cognitive and physical capabilities required to train for, operate and maintain these systems.

Training: includes the instruction or education and on-the-job training required to provide personnel with their essential job skills, knowledge, values and attitudes, as well as any constraints on such training.

System Safety: identifies safety risks occurring when the system is set up, used, dismantled, transported or maintained.

Health Hazards Assessment: identifies short- or long-term hazards to health occurring as a result of normal operation of the system.

Important takeaways from this section are the parallels between the stages of the procurement process and the system development life cycle. Within these complementary models, needs analysis and requirements definition are the natural fit for considering and defining usability requirements. HSI drives the application of HF and situates the purpose of usability within the context of this report. Within the HSI framework, HF activities are applicable at varying levels of effort throughout the development life cycle to define and evaluate usability requirements. A usability requirement is a compulsory user performance criterion specified in terms of the product's context-of-use and the implied needs in the flow of the user's task. As such, the next section presents a number of applicable usability standards, followed by means of applying them to requirements definition.

3.2 Usability Standards

Usability considerations within software system engineering, necessary for defining usability in procurement, may be supported in part through the use of design standards. Standards are formal agreements on a specific topic that allow the codification of best practices or a set of requirements shared across industries, national boundaries and disciplines (Geis et al., 2003). HF and other design standards do not apply only to custom acquisition programs; COTS products can also benefit from the application of such standards, as COTS items tend to vary in the quality of their interface design.

Numerous design standards exist that can support usability requirements, evaluation and design. Note that some standards are open-access (e.g., Military Standard 1472G) and others must be purchased (e.g., ISO 9241). The main categories of design standards are listed below:

1. Design specification standards: provide detailed guidance on the design elements of a product;
2. Design process standards: describe general approaches and specific methods used to develop the product;
3. Proprietary standards: used for developing standard user interfaces; and
4. Standard references: composite "standards of standards", detailed recipe books of published standards collected for the purposes of select domains.

Examples of the four categories of standard are presented in Table 3-1, and more detailed descriptions are provided in Appendix A.

Table 3-1: Usability Standards

Design specification standards:
- ISO 9241-11, Guidance on Usability (1998). Example: "Detectability: distinctive message or coding techniques should be consistently used to alert users to conditions that require special attention."
- Military Standard (MIL-STD)-1472G. Example: "Control to background contrast: sufficient colour/brightness contrast between the control and its background shall be provided to ensure prompt and accurate identification by the user."
- ANSI/HFES 200, Human Factors Engineering of Software User Interfaces. Example: "User preference settings: provide the capability to use preference settings across locations."

Design process standards:
- MIL-STD-46855A. Example: "Test and evaluation: human engineering test and evaluation shall be conducted to support design decisions, and to verify and validate that military systems, equipment, and facilities meet human engineering criteria, can be operated and maintained in their intended operational environment within the intended users' performance capabilities, and are compatible with the overall system requirements."
- ISO 13407, User Centred Design. Example: "Specify the context-of-use: identify the people who will use the product, what they will use it for, and under what conditions they will use it."

Proprietary standards:
- Windows 8 UX Pack. Example: "Drop-down lists: when you design a drop-down list, sort list items in a logical order, such as grouping related options together, placing most common options first, or using alphabetical order. Sort names in alphabetical order, numbers in numeric order, and dates in chronological order."

Standards reference manuals:
- Human Factors Design Standard (HFDS) (Ahlstrom & Longo, 2003). Example: "Context: context should be provided for displayed data [Source: MIL-HDBK-761A, 1989]. For example, when a user is changing parameters for a facility, relevant information concerning that facility should be displayed [Source: MIL-HDBK-761A, 1989]."
- Nuclear Regulatory Commission (NUREG) 0700, Human-System Interface Design Review Guidelines. Example: "General display designs: display information consistent with user conventions. Information should be displayed consistently, according to standards and conventions familiar to users. The wording of displayed data, labels, and other information should incorporate the task-oriented terminology of the users, and avoid unfamiliar terms used by designers and programmers."
- Common Industry Specification for Usability Requirements (CISU-R). Example: "Goals: the main goals for each user group shall be listed, without reference to any specific means of achieving them. The goals should be an intended outcome of value to the user or business, for example: accurately completing a particular form, locating the most relevant information, or successfully setting up a computer."
- ISO/IEC 25062:2006, Reporting Results: follows a variety of ISO standards guiding users on how to report summative usability test results. Example: "Participants: this section shall include the total number of participants tested for each system (a minimum of 100 participants shall successfully cast a ballot for each of the two systems); the key characteristics and demographics of the participants, including gender, race, education and age (this information can be expressed in a table, a bulleted list or a graph/chart); and a comparison of the participants tested with the target demographic criteria."

Although this section has addressed procuring usable systems through the incorporation of usability requirements and evaluation, facilitated by standards and metrics, there is no optimal or best-practice application of metrics and standards that can be generalized across products. In other words, the application of standards, or even of usability tests, cannot guarantee good design (Ahlstrom & Longo, 2003). First, HF standards can be implemented in different ways, and usability tests may not be conducted properly. In this sense, the user is the unknown factor that can impede a standard's intent. Second, standards cannot replace good human factors expertise. A designer who is very knowledgeable in HF might do well without using any standards, whereas a novice designer might do poorly even with the help of standards (Ahlstrom & Longo, 2003).

The intended result of using this document is to provide a better understanding of the context of usability within software development, towards the acquisition of a more usable system. Throughout the remainder of the report, both usability process and specification standards are presented as means of defining requirements and evaluating the usability of a product. It should be noted that multiple standards can be used for deriving usability requirements for RFPs or for assessing existing software or systems. Through detailed analysis of the context-of-use, procurers can select the appropriate standards to derive and evaluate usability requirements based on product design and usability processes.

3.3 Usability Measures

Usability measures contribute to the systems development life cycle by providing feedback on usability problems and validating the usability of a system. Usability testing describes a wide range of experimental and observational approaches used to determine the usability of system features at all stages of the system development life cycle. Evaluation of the usability of a product provides feedback on the extent to which a design meets user needs and is thus central to a user-centred design process. Without proper usability evaluation, a project runs a high risk of expensive rework to adapt a system to actual user needs, or of potential rejection of the product or system (NRC, 2007). As will be shown later in the report, bid evaluators can use knowledge of usability evaluation as part of the procurement process. The following sections describe four common usability measures: formative evaluation, summative evaluation, expert assessment, and model-based evaluation.

Formative Evaluation

A formative evaluation is usability testing completed prior to the completion of software development; it gathers user feedback with respect to the software design. Formative methods focus on understanding the user's behaviour, intentions, and expectations, and typically employ a think-aloud protocol. Usability information is gathered through discussions with users, and these early evaluations can be conducted with paper prototypes or interactive wireframes. When considering COTS systems, developers who undertook a user-centred design (UCD) process should easily be able to provide examples from the design evolution. A well-conducted UCD process should provide traceability from user requirements to subsequent designs. Some examples of prototypes that can be evaluated (either prospectively or retrospectively) are:

- Paper-based, low-fidelity simulations for exploratory testing;
- Computer simulations (typically screen-based, e.g., Flash, Macromedia Director, Visual Basic, Java, HTML), which can simulate the user interface while sacrificing full fidelity; and
- Working early prototypes of the actual product.

Formative evaluations are done in rapid succession as design feedback is required. This can leave some parts of the interface unevaluated, which can lead to usability issues when the software is completed. The advantage of a formative evaluation is that it gets users involved in design very early in the process and allows for valuable feedback. The disadvantage is that, because the evaluation is done early in development, some usability issues may not be recognized until deployment. Nonetheless, some form of early user involvement is more likely to produce a usable system than no user participation at all.

As will be discussed in Section 4, formative usability assessments should be included in the RFP and assessed in the evaluation. If the product has already been developed, the procuring organization can request that bidders describe the most significant formal usability test that has been conducted on the product. If the supplier organization does not possess the capability to conduct usability testing, it could specify its intent to contract a reputable third-party evaluator with usability expertise to execute the testing.

While formative evaluation is done during development to improve a design, summative evaluation is done after development to assess a design (absolute or comparative) and to document the usability characteristics of the software (Scholtz, 2004). Summative testing is what many people consider to be the true form of usability testing; however, it is only possible on a completed system.

Summative Evaluation

Summative methods for measuring quality in use can be used to evaluate whether the usability objectives have been achieved (ISO/IEC 25062:2006). If any of the measures fall below the minimum acceptable values, the potential risks associated with releasing the system before the usability has been improved should be assessed. The results can be used to prioritize future usability work in subsequent releases. Summative usability testing of an existing system can provide baseline measures that form the basis for usability requirements (i.e., objectives for human performance and user satisfaction ratings) for the next modification or release. Summative tests at the end of development should have formal acceptance criteria derived from the usability requirements (Theofanos, 2006). As discussed in Section 3, requirements are derived from the iterative process of defining the context-of-use and associated standards, and establishing measures of effectiveness, efficiency and satisfaction.

Joshi, Sarda and Tripathi (2010) provide eight general usability evaluation guidelines that can be used for summative testing:

1. Both organizational data gathering and user studies are done before requirements are finalized;

2. User studies are done in the context of the users by the method of contextual inquiry;

3. User studies are done with at least 20 users in each profile;

4. User studies are done by people with experience in user studies in a similar domain of at least 2 projects;

5. The findings, including user problems, goals, opportunities, and constraints, are analyzed, documented, and presented in an established user modelling methodology such as personas, work models, affinity diagrams, etc.;

6. Competitive/similar products and earlier versions of the products are evaluated for potential usability problems, at least by using discount usability evaluation methods such as heuristic evaluation, and are benchmarked;

7. User-experience goals are explicitly agreed upon before finalizing requirements; and

8. Results are prepared according to the common industry format by following ISO/IEC 25062:2006.

Summative testing typically uses scenarios and relies on pre-defined tasks that participants must complete. These tasks would be elicited from a task analysis conducted much earlier with users and stakeholders (Scholtz, 2004). The use of scenarios provides a systematic means to evaluate users on the same tasks. All tasks in the evaluation need to represent the functionality of the software and, ideally, the evaluation should occur in the work environment where the software will be used. This will allow users to be exposed to all extraneous factors (e.g., distractions, interruptions, etc.) that may influence usability of the software (Scholtz, 2004).

3.3.3 Expert Assessment

Product quality can be assessed through the evaluation of user behaviour and product characteristics. Although a user-based evaluation is the ultimate test of usability, it is not usually practical to evaluate all permutations of user type, task, and operational conditions. Expert evaluation of the characteristics of the product or interactive system can anticipate and explain potential usability problems without user testing, and it can be carried out before there is a working system. However, evaluation of detailed characteristics alone can never be sufficient, as it does not provide enough information to accurately predict eventual user behaviour. Methods based on expert assessment of the characteristics of a system are used for the following purposes:

To provide breadth that complements the depth of user-based testing;

When there are too many tasks to include all of them in a usability test;

Before user-based testing;

When it is not possible to obtain users;

When there is little time; and

To train developers.

There are several approaches to expert-based evaluation, including heuristic evaluation, usability walkthroughs, and cognitive walkthroughs. Again, aside from actually conducting these tests during the evaluation, procurers can request that test results be delivered according to a common industry format (ISO/IEC 25062:2006) to facilitate comparison among vendors (Bevan, 1999).

3.3.3.1 Nielsen's Usability Heuristics

Heuristic evaluation assesses whether a software user interface follows established, industry-accepted usability guidelines based on general rules of thumb. Nielsen (1998) provides ten general usability heuristic categories that offer recommendations for, or assessment of, system development (see Appendix A.1.4). Within each of the ten usability heuristics, there are a number of questions that a usability expert uses to determine if there are issues or problems with the system GUI. The evaluation should be conducted by a minimum of five usability experts, each of whom independently judges the usability of the system. Once all evaluations are complete, the results are compiled according to severity ratings.

From a bid evaluation perspective, the supplier could show which high-priority issues were identified, and how these issues were addressed within the system design. It would also be useful to provide follow-up results showing that the issues are no longer present in the current COTS design. The heuristics provide high-level guidance and could be paired with a more detailed human engineering design standard such as MIL-STD-1472G (2012) Department of Defense Design Criteria Standard: Human Engineering, or the Nuclear Regulatory Commission NUREG-0700 Human-System Interface Design Review Guidelines.

3.3.3.2 Cognitive Walkthrough

A cognitive or usability walkthrough identifies usability problems while attempting to achieve tasks as a user would, making use of the expert's knowledge of, and experience with, relevant usability research. A cognitive walkthrough uses the tasks identified in the task analysis to walk through the steps and identify potential usability issues. Typically, usability experts consider four questions when conducting a walkthrough (Blackmon, Polson, Muneo, & Lewis, 2002). These are:

1. Does the user understand which subtasks are needed to achieve the goals?

2. Do users notice whether the correct action is available and easy to access (e.g., is the button visible)?

3. Do users know that a subtask can be completed by initiating an action (e.g., is it clear which button to select)?

4. Is feedback provided to users upon completion of an action?

A variation is the pluralistic walkthrough, in which a group of users, developers, and human factors people step through a scenario, discussing each dialogue element. Methods such as a usability walkthrough that employ task scenarios are generally the most cost-effective, and can be combined with heuristic principles or checking conformance to guidelines. Cognitive walkthroughs are included here as another example of what a procurer could look for in the bid evaluation as a sign of UCD practice that should help to increase the usability of a custom, MOTS or COTS system.

3.3.4 Model-Based Evaluations

Model-based evaluations use computational algorithms to predict human interactions with software, which may reveal usability issues prior to, or in lieu of, actual user testing. Model-based evaluation methods can predict such measures as the time to complete a task or the difficulty of learning to use an interface. Some models have the potential advantage that they can be used without the need for any prototype. However, setting up a model usually requires a detailed task analysis, so model-based methods are most cost-effective in situations in which other methods are impracticable, or in which the information provided by the model is a cost-effective means of managing particular risks. There are a variety of modelling tools available; three commonly used tools are discussed in the following sections.

3.3.4.1 Goals, Operators, Methods and Selection Rules (GOMS)

GOMS is built on a model of the human information processor encompassing visual and auditory perception, long- and short-term memory, and cognition. Goals are what the user needs to accomplish. Operators are the elementary actions performed to reach the goal, and methods are the sequences of operators that accomplish the goals. Selection rules describe the criteria that determine when a user chooses one selection path over another. The model also includes times for completion of all the tasks. A task analysis is completed in order to develop the goals, operators, methods and selection rules. When this is complete, the tasks are put into the model and an analysis of the procedures required to achieve the goals is created (Card, Moran & Newell, 1983).

3.3.4.2 Integrated Performance Modelling Environment (IPME) Tool

The Integrated Performance Modelling Environment (IPME) tool is a task network simulation environment that is used to estimate visual, auditory, cognitive and psychomotor workload based on the priority and distribution of task flows. Workload ratings of individual tasks are assigned based on the Visual Auditory Cognitive Psychomotor (VACP) rating scale developed by McCracken and Aldrich (1984). This methodology provides guidance by which to assess workload profiles for task sequences to determine instances where excessive workload demands might cause operator error. Workload ratings are on a scale from zero to seven,

where zero is low workload and seven is the maximum. The workload represented by each number in the scale varies with modality. For example, in the visual modality, zero represents no visual activity and seven is when a task requires continuous visual scanning. A task analysis would need to be done prior to running the simulations in the model. Once the user's tasks have been established, a workload rating is assigned to each task and simulations are run. The IPME output shows workload levels and task performance (IPME, 2009).

3.3.4.3 Executive-Process/Interactive Control (EPIC) System

The Executive-Process/Interactive Control (EPIC) system incorporates theoretical and empirical findings regarding human performance in dynamic environments. Although EPIC was not originally developed for examining performance when assessing the usability of software, it is starting to be used for this purpose. Within EPIC, a model can be developed that represents the many steps and procedures required for users to navigate through software; this is done using production rules. The production rules incorporate the external stimuli required for completing specific tasks and then execute procedures in the way the task requires. In turn, this simulates a human performing the tasks, and cognitive and perceptual performance measures are derived (Kieras & Meyer, 1997).

3.3.4.4 Advantages and Disadvantages of Model-Based Evaluations

The advantages of model-based evaluations are that they are less expensive and theoretically less time-consuming than empirical research involving human participants, and that many iterations of the design can be tested. The disadvantage is that a task analysis must be completed prior to model development, and this activity is time-consuming and costly. Also, model-based evaluations do not provide information about the errors that could result from poor design.
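As a minimal sketch of the kind of calculation a GOMS analysis produces, the following Python snippet estimates the completion time of a hypothetical task using Keystroke-Level Model operator times adapted from Card, Moran and Newell (1983); the task sequence and the exact time values are illustrative assumptions rather than figures taken from this report.

```python
# Minimal Keystroke-Level Model (KLM/GOMS) sketch.
# Operator times (in seconds) are commonly cited approximations from
# Card, Moran & Newell (1983); treat them as illustrative defaults.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke or button press (average-skilled typist)
    "P": 1.10,  # point with a mouse at a target on the screen
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation for the next action
}

def predict_task_time(operator_sequence: str) -> float:
    """Sum the operator times for a sequence such as 'MHPK'."""
    return sum(OPERATOR_TIMES[op] for op in operator_sequence)

# Hypothetical task: decide (M), reach for the mouse (H), point (P),
# click (K), home back to the keyboard (H), decide (M), type a 4-char code.
sequence = "MHPKH" + "M" + "KKKK"
print(f"Predicted completion time: {predict_task_time(sequence):.2f} s")
```

A requirements engineer could compare predictions of this kind against the task-time targets discussed later in Section 4.2.3 before any prototype exists.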

4 PROCURING USABLE SYSTEMS

The procurement of usable systems centres on the early and frequent integration of usability expertise from those involved in the acquisition process. On the procurement organization's side, usability experts can be helpful in the front-end analysis to understand and formalize stakeholder priorities and end-users' needs. This knowledge can be transformed into testable usability requirements through the systematic process of understanding a system's intended users and their tasks with the system. On the supplier's side, usability experts can help craft bid responses, add to organizational capability by providing mandatory expertise credentials, and, of course, plan and conduct usability testing and evaluation. The difference is the timing of the usability involvement, which depends on whether the system is custom, COTS, or some degree of MOTS. While early HF involvement is often deemed not worthwhile for smaller procurements, such as purchasing one system for one person, this should be weighed against the criticality of the user's role and tasks. In any case, as will be shown, the application of early usability requirements is scalable, such that any procurement should be able to specify a minimum level of usability.

4.1 Context-of-Use

As presented in earlier sections, needs analysis is a key stage of the procurement where user needs are typically sought for technical and non-technical requirements. From a usability perspective, the needs analysis is the ideal time to also elicit and analyze the context-of-use information as part of the formal stakeholder meeting prior to software development. In addition to functional requirements, a context analysis seeks the activities that users perform to achieve system objectives, the relevant characteristics of the end-users of the system (e.g., expected training, degree of fatigue), the physical environment (e.g., available light, temperature) and any equipment to be used (e.g., protective or communication equipment). The social and organizational influences on users that could affect system use or constrain its design are analyzed when applicable.

According to the ISO definition of usability, a number of criteria must be established to declare a product usable within a context-of-use. As shown in Figure 4-1, the level of usability achieved will depend on the specific circumstances, physical and environmental, in which a product is used (ISO 9241-11). The goal of soliciting context-of-use is to understand how to procure systems that will effectively meet users' needs and are compatible with the physical and organizational context. Context-of-use analyses can help to mitigate the risks of design failures by providing a broader understanding of user needs and potential design obstacles. Once context-of-use is established, HF/usability experts can apply industry-accepted standards, research-driven theories and established guidelines to the prior front-end analyses. The result is the identification and formalization of usability requirements.

Figure 4-1: Usability Framework (ISO 9241-11; Bevan, 1997)

Following the stakeholder requirements, the procuring organization can use context-of-use information to create context scenarios that describe the operation of the system in its intended environment, and to identify requirements that may not have been formally specified by any of the stakeholders, for example, legal, regulatory, and social obligations.

4.1.1 Context Scenarios

In systems in which usability objectives are relevant, the context-of-use should be validated as part of customer requirements (ISO 13407). However, defining context-of-use does not imply simply asking users what they want, as what a user wants is not always what the user really needs. Furthermore, software designers are willing to design for usability, but on their own they are not able to specify valid usability requirements. Therefore, usability requirements need to be developed by a usability expert in cooperation and in consensus with users. This is a process of helping users to figure out what they want to do and how they want to use a product.

A useful tool for describing the context-of-use in the language of the user is the "context scenario" (Blackmon, Polson, Muneo, & Lewis, 2002). A context scenario describes the user's work situation as an episode, including the objectives, the key tasks to achieve them, the means and prerequisites for conducting tasks, and obstacles or shortcomings, as well as the user's vision of how to improve the situation. Ultimately, a context scenario provides a structured story of the user's daily work episodes. The systematic structure of the context scenario can be achieved through a task analysis.
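To make the structure of a context scenario concrete, the sketch below records its elements as a small Python data structure; the field names and the example values (which anticipate the railway case used later in Section 4.2.2) are illustrative assumptions, not a schema prescribed by any standard.

```python
from dataclasses import dataclass, field

@dataclass
class ContextScenario:
    """One episode of a user's work, decomposed per the elements above."""
    user_profile: str                 # who the user is (training, experience)
    objective: str                    # what the episode is meant to achieve
    key_tasks: list[str] = field(default_factory=list)
    prerequisites: list[str] = field(default_factory=list)  # means/preconditions
    obstacles: list[str] = field(default_factory=list)      # observed shortcomings
    improvement_vision: str = ""      # the user's view of a better situation

# Hypothetical example in the spirit of the railway case discussed later:
ticket_purchase = ContextScenario(
    user_profile="Occasional traveler, no prior use of the ticket kiosk",
    objective="Buy a ticket from the departure station to a destination",
    key_tasks=["select departure", "select destination", "pay"],
    prerequisites=["kiosk located in the departure station"],
    obstacles=["departure station must currently be entered manually"],
    improvement_vision="Departure station pre-selected at dialogue start",
)
```

A task analysis would then decompose each entry in key_tasks into the ordered subtasks used for scenario-based testing.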


formally included in the RFP. The resources assigned to developing usability requirements will depend on the stage of product development (e.g., COTS vs. custom), the amount of information available to the engineer, and the budget available to define requirements. The following sections discuss approaches to defining usability requirements: the usability concerns are identified first, based on context-of-use, and then translated into verifiable requirements.

4.2 Writing Usability Requirements

The concept of requirements has typically acted as the foundation for competitive selection of a desired system's suppliers and the subsequent contracts between the acquirer and the selected supplier (NRC, 2007). Normally, the corresponding supplier payments or penalties are based on the degree to which these requirements are fulfilled. To achieve system usability in a contractual environment, usability requirements should be held to the same standards as any technical or functional requirements; they should be traceable to the users' needs, complete, consistent, unambiguous, and testable (NRC, 2007). Ideally, requirements should be verifiable (testable), valid (the test measures what we want it to measure), and comprehensive (covering the system thoroughly enough). While it has been argued that good usability requirements should be based on user performance, rather than design principles, guidelines, or process requirements (Jokela, Laine & Nieminen, 2013), it may not always be possible to state precisely how well users have performed, or should perform. Ultimately, the identification of objective criteria is difficult as it is context-sensitive.

4.2.1 Incremental Approaches

The specification of usability requirements during analysis, whether in development or procurement situations, is not a straightforward process. To successfully write usability requirements, the customer (or a knowledgeable representative of the customer) must be well informed about the available standards, practices and processes of usability design and evaluation techniques. While the aforementioned standards provide principles and recommendations, they may not always be entirely useful on their own. Usability standards need to be taken as guidance for recognizing and specifying usability requirements in a specific context-of-use, which is subject to analysis by usability or human factors experts prior to the RFP development.

The Common Industry Specification for Usability - Requirements (CISU-R) provides a standard for specifying usability requirements consistent with the usability definition of ISO 9241-11 and the process recommendations of ISO 13407. The CISU-R focuses on iterative development of requirements. As such, requirements are decomposed into context-of-use, performance criteria, and usability test procedures that can be documented with increased precision as more information is gathered about the key user groups and their needs. The CISU-R recommends frequent stakeholder engagement. After each iterative round of discussion and revision, the stakeholders can approve the usability requirements for the product. These requirements then form the basis for continued product development and testing based on three levels of compliance (see Figure 4-3). Some projects may choose to develop

requirements iteratively, reaching higher levels of compliance over time. As more information is gathered, the usability requirements for the key user groups can be documented with increased precision and specificity as the parties involved finalize and approve the usability requirements for the product. For example, requirements might be developed to Level 1 during work on conceptual prototypes; to Level 2 as the user interface design is specified; and to Level 3 for summative testing of the product.

Figure 4-3: Information included in Requirements at each Level of CISU-R Compliance

4.2.2 Requirements based on Context-of-use & Standards

Geis and colleagues (2003) also recommend specifying a usability requirement from general to specific. The authors' approach guides requirements definition through a product's context-of-use (ISO 9241-11) and then relates it to an applicable design. This may serve to clarify expectations from both the procurement and supplier viewpoints. Geis and colleagues' examples centre on the ISO 9241 principles; however, this approach could be adapted to a number of other design standards.

Figure 4-4 illustrates how a context-of-use analysis is used as a prerequisite for specifying usability requirements within the ISO 9241 standards. Information acquired from the context-of-use analysis provides the rationale for specifying context-related usability requirements. These requirements, typically specified in terms of user performance, are suitable for bridging the gap between the implied needs in the context-of-use and a proposed design solution. Figure 4-4 presents the template for context-of-use driving requirements.

Figure 4-4: Framework showing the Application of ISO 9241-11 in the Specification of Context-Related Requirements (Geis et al., 2003)

To apply this method, Figure 4-5 provides a concrete example of a railway traveler's ticket-purchasing needs and the corresponding design requirements. From a context-of-use analysis, we are informed that "Most railway travelers buy railway tickets within the railway station where they start their journey from." As such, the requirements engineer decided that the user should have his station of departure pre-selected at the beginning of the dialogue with the system. This context-related requirement conforms to ISO 9241-10 (suitability for the task): "If default values exist for a task, these default values should be made available to the user automatically." This structure provides a means for those in charge of writing requirements to link to appropriate standards. This method also provides traceability for bidders, who need to show how their design solutions meet both the standard and the context-of-use.

Figure 4-5: Example Analysis Case showing the Application of ISO 9241-10 in the Specification of Software Design (Geis et al., 2003)

Accordingly, Geis and colleagues' (2003) framework can further be applied for constructing usable design solutions. Figure 4-6 describes Geis and colleagues' application of ISO 9241-10 in combination with the usability standards devoted to dialogue techniques (ISO 9241 parts 14-17). Here we see the proposed design requirement (and also solution) for the user needing to have his station of departure pre-selected: "Fields should contain default values wherever possible and appropriate to the task" (ISO 9241-17). Through methods such as task analysis and context scenarios, requirements engineers can elicit a plethora of similar context-of-use requirements that can then be translated into proposed design solutions. Equally, this can be used as a means to detail the areas where usability is required, and to suggest standards to be applied.

Although the generic recommendations of ISO 9241-10 are not specific to any dialogue technique, they may correspond to a specific recommendation given in a dialogue-technique-specific standard. Figure 4-6 illustrates how a designer may choose an applicable recommendation in one of the specific standards for dialogue techniques and set this recommendation relative to both the context-of-use and a design principle (or generic recommendation). This approach helps the designer to create a design solution in line with the recommendations given in usability standards. Although the usability standards do not provide a design proposal, the designer is guided to a solution that does not violate the standard recommendations (Geis et al., 2003).

Figure 4-6: Applying ISO 9241-10 in Association with the Dialogue-Techniques Design Solution (Geis et al., 2003)

This section has discussed how to conduct a stakeholder analysis to define the desired product's context-of-use. Prior to development of a custom system, or the procurement of a COTS system, a purchasing organization can use context-of-use information combined with usability design standards as a framework (Geis et al., 2003) for specifying the quality in use requirements that the system must meet and against which acceptance testing may be carried out (Bevan, 1997).

4.2.3 Quantitative Usability Requirements

The quantification of usability goals through the use of usability objectives is a recognized human factors and HSI best practice for many kinds of systems; however, usability objectives are not employed often or consistently (NRC, 2007). The main goal of specifying usability objectives is to create a metric that can be applied during usability testing as a quantitative acceptance criterion for the test. Usability objectives are one way to create a quantitative quality-related goal and avoid the qualitative, ambiguous conclusions that are sometimes claimed about devices (e.g., "this device is user-friendly" or "easy to use").

Johansson and Lahtinen (2012) suggest that the current RFP terminology of functional and non-functional requirements is too ambiguous. Usability requirements should be presented as context-specific, measurable metrics, which allow bidders to more clearly evaluate their products, avoid errors, and provide a more accurate price (Johansson & Lahtinen, 2012). Further, precise requirements should facilitate procurers' evaluation of specific proposals.
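As a minimal sketch of how the Geis et al. (2003) structure can be made explicit and traceable, the snippet below stores a usability requirement alongside its context-of-use rationale and the standard clause it conforms to; the record layout is an illustrative assumption, with the railway entry drawn from the example of Figures 4-4 to 4-6.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UsabilityRequirement:
    """Links a context-of-use finding to a requirement and a standard clause."""
    req_id: str
    context_rationale: str   # finding from the context-of-use analysis
    requirement: str         # the verifiable requirement derived from it
    standard_ref: str        # clause the requirement conforms to

# The railway ticketing example from Figures 4-4 to 4-6, encoded as data:
R1 = UsabilityRequirement(
    req_id="UR-001",
    context_rationale=("Most travelers buy tickets at the station where "
                       "their journey starts."),
    requirement=("The station of departure shall be pre-selected at the "
                 "beginning of the purchasing dialogue."),
    standard_ref="ISO 9241-10, suitability for the task (default values)",
)
```

Keeping the rationale and the standard reference beside each requirement gives bidders the traceability noted above, and gives evaluators a ready-made checklist.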

A quantitative usability requirement should be specified in terms of a pre-defined performance metric of the user's tasks while interacting with the system user interface. Again, users and tasks should align with the product's established context-of-use. The concept of performance has a threefold meaning (Geis et al., 2003). First, performance measures a behaviour of an object (e.g., the selected menu option is echoed on the display by highlighting). Second, performance refers to a process conducted by the user (e.g., discriminating selected and unselected menu options). Third, performance indicates the quality or quantity of a certain success which a user should achieve in interaction with the system (e.g., a user succeeds in selecting the intended menu option within a menu panel).

Typically, quantified usability objectives include human performance goals (objective goals), such as task completion time, success rate or error rate and type, and learning time. These may be based on baselines from similar or current products (when a new replacement is being procured), or on industry standards when available. If possible, baseline values can also be defined through related HF research activities; however, the resource investment may be substantial. According to the ISO 9241-11 definition of usability, the following are definitions and examples (NRC, 2007) of quantified measures:

1. Effectiveness is a measure of the accuracy and completeness with which users achieve specified goals. Common metrics include completion rate and number of errors. E.g., "90 percent of experienced nurses will be able to insert the infusion pump tubing set on the first try with no instructions, and 100 percent will be able to correct any insertion errors."

2. Efficiency is a measure of the resources expended in relation to the accuracy and completeness with which users achieve goals. Efficiency is related to productivity and is generally measured as task time. E.g., "After reading the quick reference card, 90 percent of experienced clinicians will be able to properly configure the display on the first try to show the two ECG lead traces."

3. Satisfaction is the degree to which the product meets the users' expectations, expressed as a subjective response in terms of ease of use, satisfaction, and usefulness. E.g., "80 percent of experienced intensive care unit nurses will prefer the readability of the display for the latest generation ventilator monitor compared with the existing monitors."

Despite the aforementioned recommendations, it may still be difficult for a customer to specify quantifiable usability requirements in an RFP, especially those that suggest further user testing prior to contract award. With custom systems, the RFP can specify that the contractor shall verify, validate and manage usability requirements, including the conduct of acceptance tests. When the system is COTS with enhancements, such as MOTS, the customer cannot prescribe a specific user interface (Lauesen, 1998). Furthermore, it must be possible to verify the usability requirements with a reasonable effort that will not discourage serious proposers. Lauesen (1998) offers the notion of design styles as an alternative approach to consider when creating

requirements. Lauesen (1998) suggests six different styles of usability specification; styles 1-4 are forms of user testing, while styles 5-6 can be applied offline:

1. Performance Style: The performance style specifies how fast users can learn various tasks, how fast they can perform after training, etc. Requirements are verified through usability tests using prototypes early during development, thereby tracing the requirements forward into design.

a. E.g., Novice users shall be able to perform tasks Q and R in 15 minutes. Experienced users shall be able to perform tasks Q, R, and S in 2 minutes.

2. Defect Style: The defect style resembles the performance style, but instead of measuring task times, it identifies the usability defects in the system and specifies how frequently they may occur. A usability defect is something which causes the user to make mistakes or feel annoyed. The user is asked to think aloud during usability tests, and an observer records the defects.

a. E.g., On average, a novice user shall encounter less than 0.2 serious usability defects when performing tasks Q and R. [A serious usability problem is typically a task failure, i.e., users cannot complete the task on their own. Thus the requirement roughly says that at least 80% of users shall be able to complete the tasks on their own.]

3. Process Style: The process style specifies the development procedure to be used for ensuring usability. The style does not say anything about the result of the development, but anticipates that the process will generate a good result.

a. E.g., During design, a sequence of 3 prototypes shall be made. Each prototype shall be usability tested and the defects most important to usability shall be corrected.

4. Subjective Style: The subjective style asks users for their opinion, typically with questionnaires using a Likert scale (e.g., Chin, Diehl & Norman, 1988; Lin, Choong & Salvendy, 1997; Sauro & Kirkland, 2005).

a. E.g., 80% of users shall find the system easy to learn and efficient for daily use.

5. Design Style: The design style prescribes the details of the user interface, essentially turning the usability requirements into functional requirements. They are easy to verify in the end product and easy to trace during development.

a. E.g., The system shall use the screen pictures shown in App. Xx.

6. Guideline Style: The guideline style prescribes the general appearance and response of the user interface. This may be considered as a set of broad functional requirements that apply to every window, etc. Guidelines may be official or de facto style guides (e.g., the MS-Windows style guide), or they may be company guides or experience-based rules (e.g., MIL-STD-46855A). Although guidelines may improve usability, they have little relation to the essence of usability. In other words, a system can follow the guidelines and still be one that users find very hard to use.

a. E.g., The system shall follow the MS-Windows style guide. Menus shall have at most three levels.

Lauesen (1998) warns that no style is ideal. While system-oriented requirements are easy to verify during design, they still do not guarantee the usability the customer expects. The styles also specify and measure the usability factors more or less directly. The best choice in practice is often a combination of styles, so that some usability requirements use one style, and others use another (Lauesen, 1998).

Furthermore, as suggested in the previous section, it is difficult to set target values. In general, it can be risky to insist on specific targets in a procurement process with COTS systems. If the target is too restrictive, suppliers may decide not to make a proposal. In reality, the customer might be satisfied with a system that does not fully meet the target, if the system has other qualities. On the other hand, why set too pessimistic a target if you could get something better? Lauesen's (1998) solution is to let suppliers specify the target values, for instance by asking them to specify the necessary course time for performing certain jobs, or the training time required to learn how to use the system. When the customer later compares the various proposals, they can compare prices as well as usability-related performance values. When the customer has selected a supplier, they set up a contract based on the RFP requirements. In case the supplier chooses an iterative design, there is a risk that the supplier cannot provide a satisfactory design in the stated number of iterations. In this instance, the customer might want to cancel the contract, but that is difficult since the supplier has not committed to any specific usability level. Lauesen (1998) suggests that the customer pay a fee for the cancellation, with the supplier specifying the fee up front.

4.3 Evaluating Usability Components in RFPs

This section discusses how bid evaluators can use knowledge of usability evaluation as part of the procurement process. From a procurement perspective, awareness of the various types of usability assessment techniques is vital, from requirements definition through to the conduct of an informed bid evaluation. In particular, when hands-on usability testing is not possible at the time of evaluation, the bid evaluator needs to understand what (if any) usability testing was completed at the time of product development, or what testing is planned post-COTS modification. If development is required post contract award, the evaluator will need to assess a supplier organization's claimed ability to integrate the requested amount of usability into the product design.

4.3.1 Usability Components Overview

There are four main components that should be considered when assessing and specifying system usability: organizational capability, process quality, product quality and quality in use (NRC, 2007; UsabilityNet, 2003). The relationship among these four measures is illustrated in Figure 4-8. As described below (from top-left moving clockwise), the organization's usability capability permits the occurrence of usability processes. Better processes yield products built according to accredited design standards and guidelines, leading to better quality products. In turn, better quality products are perceived as easier to use by users.


4.3.2 Organizational Capability

To obtain such information from bidders, the procuring organization must state the requirement in the RFP. It has been suggested that bidders describe their management process that verifies and controls the overall development life cycle and assigns traceability and accountability to ensure that customer requirements are met during product design and development (Hix, Hartson, Siochi & Ruppert, 1994).

As part of organizational capability, supplier organizations are typically required to describe their Project Management Organization (PMO). The PMO description may include the structure of the personnel responsible for each sub-domain, one of which could be usability. It also includes the links between usability and HF/HSI personnel and the contractor's organization (if HF is contracted externally), the number of personnel proposed, position titles, qualifications, responsibilities, experience, accreditations, and HF capability in sub-contractors' organizations. Accordingly, a number of relevant organizational capabilities that could be included in the RFP are provided below:

The bidder shall describe the size and the professional skill set of their user-experience or usability team, which may include the following information:

a. Demonstrate knowledge of usability, HSI, and HF;

b. Demonstrate knowledge of how to fit usability practice into the systems engineering process;

c. Demonstrate the presence of usability, HSI and/or HF program teams;

d. Show experience with HSI programs;

e. Demonstrate the applied use of HF methods and the application of HF results; and

f. Demonstrate usability/HF-related certification (e.g., engineering, psychology, usability backgrounds, degrees, training).

4.3.3 Process Quality

Process quality implies that suppliers possess sufficient knowledge and expertise of usability practices and procedures to adhere to the customer's usability requirements (Artman, 2002). Process quality is relevant to all procurement products, whether the product is already developed or not yet developed, and whether or not user testing is possible at the time of purchase. The difference is whether suppliers are demonstrating quality in the processes already conducted, or quality in the processes to be conducted. In any case, to demonstrate process quality, the supplier should provide evidence of knowledge of, and experience with, usability concepts such as design standards, user experience, user-centered design, iterative prototyping, systems engineering for software lifecycle management, HSI and/or HF.

It is generally a good idea to include at least one mandatory RFP requirement that the proposal detail the user-interface development methodology, including descriptions of appropriate techniques and any tools that will be used (Hix et al., 1994). Further, the RFP should require

that developers explicitly show the customer how each of the required usability principles will be incorporated into their proposed methodology (Hix et al., 1994). To ensure the contractor shares the main objectives of system development, the RFP should make clear requirement specifications that focus not only on the product but also on the process and participation (Artman, 2002). Similarly, usability and user-centered design competence can be employed by the procuring organization to make a concrete, interactive prototype before the development organization is contracted. This prototype can then be used as common ground for communicating the different requirements with the decision makers at an early stage (Artman & Zällh, 2005).

The following is an example of a process requirement, and the means to evaluate it, from the healthcare domain (Healthcare Information and Management Systems Society (HIMSS), 2010):

Please describe what other user-centered design and usability evaluation practices you include in your development process, including informal usability testing. Include the number and types of users involved, the manner in which they were engaged, the type of data collected, and how your organization used the information to improve the product design. Your answer should clearly explain (and elaborate where necessary):

a. The number and types (roles) of users that participated;

b. The mix of computer skill level in the pool tested (novice to expert);

c. How participants were recruited (e.g., from one organization or many);

d. The scenarios tested;

e. Efficiency measures that were captured for each scenario and the overall results for each;

f. Effectiveness measures that were captured for each scenario and the overall results;

g. Satisfaction data that was captured either per scenario or as overall results;

h. Other qualitative results captured during the process; and

i. How your organization utilized the data to incorporate changes into your product.

The following information provides suggestions on how to recognize and evaluate suppliers' answers:

Good Answer: A good answer should reflect a range of user-centered, iterative design activities, including:

Iterative, informal usability testing with a broad selection of users starting early in the design phase;

Expert review (by human factors or usability specialists);

Contextual inquiry (a specific method of learning about users and their tasks by observation in the workplace);

Task analysis (a formal analysis of the tasks performed by each type of user and how they relate to each other in the workflow); and

Focus groups (facilitated by human factors or usability specialists).

Questionable Answer: A poor answer would describe activities that do not constitute usability/UCD methods and would provide no indication of the vendor's true incorporation of usability practices and principles. For instance, if the vendor only states that their system was designed with "user group input" and by "vendor staff with domain-relevant background" who provided the user understanding and expert perspective, this would be an insufficient response. These responses are vague and not inclusive of the UCD process, signifying that the vendor has not embraced usability in their design process.

4.3.4 Product Quality

Product quality can be measured using both formative and expert evaluation. Again, when referring to COTS products, any of the following types of tests can be incorporated into an RFP by requesting evidence that product quality testing was completed, including the results themselves. As presented in Section 3.3, product quality evaluation methods that could be described in the RFP include:

a. Methods based on expert assessment of system characteristics against design standards, or discount usability methods including heuristic evaluation and cognitive walkthroughs; and

b. Model-based evaluations (such as IPME and GOMS), which could be required or suggested in the RFP as a means for suppliers to demonstrate the projected usability of software. Given that model-based evaluations are still relatively immature, it is likely they would only be required by organizations that are intimately familiar with the area.

While expert evaluation and heuristics are traditionally done on complete systems with hands-on testing, there may be situations where COTS or MOTS products cannot be provided for the evaluation. In these cases, the supplier could again provide evidence that expert assessment was done during the original development, as well as the corresponding results. Alternatively, it is possible to conduct expert assessment during product evaluation through an online demonstration provided by the supplier. This type of demonstration could be based on a fixed set of tasks established by the customer (if known), or by the supplier if the tasks are unique to the product. If the number of potential products is large, it may be more feasible to limit this demonstration style of expert evaluation to a selected few (e.g., the top three choices based on functional assessment only). In these cases, it is again recommended to use evaluators with usability expertise to lead the evaluation in concert with active feedback from representative users.
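To illustrate how heuristic-evaluation results of the kind described in Section 3.3 might be compiled during a bid evaluation, the sketch below merges independent severity ratings from several evaluators and flags high-priority issues; the 0-4 severity scale follows Nielsen's common convention, while the issue names and the priority threshold are illustrative assumptions.

```python
from collections import defaultdict
from statistics import mean

# Severity scale (Nielsen's convention): 0 = not a problem ... 4 = catastrophe.
# Each evaluator independently rates every issue found; ratings are then merged.
ratings = [
    # (evaluator, issue, severity) -- hypothetical data for illustration
    ("E1", "No undo on delete", 4),
    ("E2", "No undo on delete", 3),
    ("E3", "No undo on delete", 4),
    ("E1", "Inconsistent menu labels", 2),
    ("E2", "Inconsistent menu labels", 1),
    ("E3", "Jargon in error messages", 3),
]

by_issue = defaultdict(list)
for _evaluator, issue, severity in ratings:
    by_issue[issue].append(severity)

HIGH_PRIORITY = 3.0  # illustrative threshold for issues a bidder must address
for issue, sevs in sorted(by_issue.items(), key=lambda kv: -mean(kv[1])):
    avg = mean(sevs)
    flag = "HIGH PRIORITY" if avg >= HIGH_PRIORITY else "minor"
    print(f"{issue}: mean severity {avg:.1f} ({len(sevs)} raters) -> {flag}")
```

A supplier could deliver exactly this kind of compiled list, plus follow-up results showing that the high-priority items were resolved, as the evidence discussed above.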

4.3.5 Quality in Use

Quality in use typically describes the evaluation of the user's performance and satisfaction when using the product or system in a real or simulated working environment. Quality in use is primarily assessed through summative user testing, with a representative end-user sample interacting with the product in the appropriate context-of-use. As discussed in Section 3.3, summative testing typically uses scenarios and relies on pre-defined tasks that participants must complete. As user testing is a rather resource-intensive task, usability experts, rather than procurement officers, should conduct it. Scoring of user performance should be based on the predefined usability criteria as defined in the RFP. Again, these may be objective values, if known, or more descriptive criteria based on context-of-use, as described in Section 4.2.

Depending on the type of product and the security restrictions of a given domain (e.g., defence), it may not be possible to conduct hands-on user testing through the procurement organization. In these situations, similar to how usability experts can be used to help write requirements, the same kinds of usability experts or other third parties may be invited to assist in evaluating bids (e.g., scholars and other experts). This can be particularly useful in evaluating the usability of an actual product (e.g., COTS) or product prototype prior to contract award. When third parties will be involved, bidders must be advised in the bid solicitation. In general, third parties participating in the evaluation, or in the bid preparation, must sign a non-disclosure agreement and a conflict of interest agreement before such participation. In addition (as presented for organizational capability), bidders are typically required to provide a minimum level of related experience, such as years of experience or project-based experience. In this sense, vendors could provide project descriptions of previous summative testing, accompanied by references to the organizations or products for whom the work was completed.

The four previously described components of usability combine the identification and evaluation of user-centered design, standards and software usability requirements, designed to elicit bidder responses that allow RFP evaluators to rank the relative usability of two or more systems (e.g., software or web applications). The following section summarizes these components in a comprehensive framework that includes methods for bid evaluation against the specified requirements.

4.4 Usability Procurement Framework

The Usability Procurement Framework (UPF) presented in Figure 4-9 is intended to summarize the methods of defining and evaluating usability requirements in the context of procurement presented in Sections 4.1 to 4.3. The overall framework proposed by the authors was derived through amalgamation of the previously presented procurement processes, related standards, context-of-use analysis and other HF methods that support usability requirement definition and evaluation.

The UPF is situated along a vertical and a horizontal axis. From left to right, the X-axis represents the spectrum from project initiation to project completion (note that, as this may encompass COTS products, it is not necessarily representative of the system life cycle). From bottom to top,

the Y-axis represents the spectrum from procurement activities to HF considerations, largely usability. Along the lower X-axis, the UPF centres on the procurement processes that were described in Section 2. Within the upper half, each stage of the procurement is driven either directly (solid arrow) or indirectly (dashed arrow) by HF and usability expertise. The primary requirement definition-based phases and activities are highlighted by the lighter grey boxes, while the requirement evaluation-based activities are highlighted by dark grey regions.

In general, the main stages of the procurement process are each supported by the HF processes linked above the respective stage. The initial needs analysis is based on the product's context-of-use. Information on context-of-use can be elicited through stakeholder meetings and user interviews, and used to create context scenarios. Scenarios are systematically decomposed by task analysis to define specific use cases. Connected to the right of the context-of-use analysis is guidance on how to write requirements. Procurers can improve the clarity of requirements passed from design to the development team by linking use-case information to usability design specification and process standards. Depending on whether the procurement is for a custom or COTS/MOTS system, the RFP can request various elements of usability, such as organizational capability (PMO and personnel expertise), process quality (the organization's quality process within management, experience in UCD), product quality (how the product meets or will meet the RFP's requirements), and quality in use (based on past or future user testing).

Linked back to the procurement process, the procurement organization can request feedback on proposed requirements and supplier organization qualifications through RFIs or related processes (RFQ, LOI, draft RFP). This feedback will help ensure the requirements are feasible, so as not to discourage applicants, and allows contractors more time to prepare their bid responses. Following the release of the RFP, potential contractors will need to provide responses to all of the requirements, including usability, as specified by the RFP. Again, these are linked to the four usability components, as contractors will need to provide traceability of their usability capability in line with the corresponding requirements.

Note that the start and near-finish of the model are marked by the SDLC model. This cycle is intended to represent the full spectrum of where the actual product development could occur within the UPF. In a COTS system, the supplier's own requirements analysis is based on their interpretation of the end-user and would have already taken place outside the procurement process in question. However, in a custom or MOTS system, further development post-contract award is necessary. In these cases, the procuring organization has more control over the usability specifications. The SDLC may need to be repeated in whole or in part to meet the usability requirements as specified in the RFP.

The UPF provides a bird's-eye view of the entire process, from writing to evaluating usability requirements. What is difficult to capture here, however, are the intricacies of COTS, MOTS and custom systems. As such, the next section illustrates revised versions of the procurement process, focused explicitly on what the procurement organization can and should request in the RFP, and what should be evaluated depending on the level of customization required.
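Recapping the quantified measures of Section 4.2.3 and the quality-in-use testing of Section 4.3.5, the sketch below checks summative test results against pre-defined acceptance targets for effectiveness, efficiency and satisfaction; both the targets and the test data are illustrative assumptions, not criteria from this report.

```python
from statistics import mean

# Hypothetical summative results: (task completed?, task time in s, satisfaction 1-5)
results = [
    (True, 95.0, 4), (True, 110.0, 5), (False, 180.0, 2),
    (True, 102.0, 4), (True, 88.0, 5), (True, 121.0, 3),
]

# Illustrative acceptance targets of the kind an RFP might state (Section 4.2.3).
targets = {"effectiveness": 0.80,   # >= 80% task completion
           "efficiency": 120.0,     # mean task time <= 120 s
           "satisfaction": 3.5}     # mean rating >= 3.5 out of 5

completion = sum(ok for ok, _, _ in results) / len(results)
mean_time = mean(t for _, t, _ in results)
mean_sat = mean(s for _, _, s in results)

checks = {
    "effectiveness": completion >= targets["effectiveness"],
    "efficiency": mean_time <= targets["efficiency"],
    "satisfaction": mean_sat >= targets["satisfaction"],
}
for measure, passed in checks.items():
    print(f"{measure}: {'PASS' if passed else 'FAIL'}")
```

If any measure fails, the risk of releasing before usability is improved can be assessed and the shortfall prioritized for a subsequent release, as noted in Section 3.3.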
Table 4-1: Summary of Key RFP Requirements and Application to Products

Organizational Processes
Description: The capability of an organization to routinely employ appropriate HSI methods and techniques.
Evaluation components: Usability expertise of personnel and company experience of usability projects; a PMO that understands and supports usability design, testing and evaluation.
Applies to: Custom - Current; COTS - Current; MOTS - Current.

Development Processes
Description: Evaluation of the systems development method to assess whether appropriate HSI methods and techniques were or will be used.
Evaluation components: Ability to plan, verify, validate and manage usability requirements, such as demonstrated experience in UCD, task analysis, iterative prototyping, user requirements and user testing.
Applies to: Custom - Current; COTS - Current; MOTS - Current.

Product Characteristics
Description: Evaluation of the design characteristics of the interactive system, user tasks, and the working environment to identify any obstacles to usability.
Evaluation components: Knowledge of, and demonstrated experience in, applying design standards/guidelines and/or expert assessment to verify or adhere to the RFP.
Applies to: Custom - Future; COTS - Current; MOTS - Current & Future.

User Performance & Satisfaction
Description: Quality in use assessment during product interaction in a real or simulated working environment.
Evaluation components: User testing based on user efficiency, effectiveness and satisfaction.
Applies to: Custom - Future; COTS - Current; MOTS - Current & Future.
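As a small illustration of how Table 4-1 could drive an RFP checklist, the sketch below encodes each requirement category with its applicability per product type and lists what to request for a chosen procurement; the dictionary layout and function are illustrative assumptions, with the Current/Future values taken directly from the table.

```python
# Table 4-1 encoded as data: the timing of evidence an RFP can request,
# per requirement category and product type. "Current" = evidence of work
# already performed; "Future" = commitments to planned work.
TABLE_4_1 = {
    "Organizational Processes":        {"Custom": "Current", "COTS": "Current", "MOTS": "Current"},
    "Development Processes":           {"Custom": "Current", "COTS": "Current", "MOTS": "Current"},
    "Product Characteristics":         {"Custom": "Future", "COTS": "Current", "MOTS": "Current & Future"},
    "User Performance & Satisfaction": {"Custom": "Future", "COTS": "Current", "MOTS": "Current & Future"},
}

def rfp_checklist(product_type: str) -> list[str]:
    """List what an RFP should request for the given product type."""
    return [f"{category}: request {row[product_type]} evidence"
            for category, row in TABLE_4_1.items()]

for item in rfp_checklist("MOTS"):
    print(item)
```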

5 DISCUSSION & CONCLUSION

Defining measurable usability requirements poses a specific challenge, both in measuring usability and in the ability to set target levels (Jokela, 2008, 2010). Part of the issue stems from a lack of clarification of the procurer's role in defining usability prior to contract award. Until recently, system development methods and models for user-centered system development have mainly focused on the developmental side of the contract (i.e., post contract award, while COTS provides development pre-contract but outside of the procurement's control; Artman, 2002). Despite ample research on user-centered design for developers, there is a lack of understanding of how to integrate usability into a general system-development process (Artman & Zällh, 2005). The implicit reasoning seems to be that it is the contractor's responsibility to ask the right questions and produce the right design. This means that procurers have only a passive voice in the development of usable systems.

To support the writing and evaluation of usability requirements, this report has provided a general framework designed to identify the usability concerns first, and later translate them into verifiable requirements (Lauesen, 1998). The UPF recommends that procurers:

1. Identify the key usability issues among the interaction of critical tasks, user profiles and system goals;

2. Select requirement styles and standards to cover the issues; and

3. Select appropriate metrics and target values.

Collectively, this information forms the usability criteria specified during procurement, and ideally leads to the purchase or development of more usable systems (Jokela, Laine & Nieminen, 2013). The intended result of the UPF is to provide a better understanding of the context of usability within software development towards the acquisition of a more usable system. The framework is not intended to substitute for usability expertise. First and foremost, to apply the UPF, the procuring organization (or a knowledgeable representative) must be well informed about the available standards, practices and processes of usability design and evaluation techniques.

Ginsburg (2005) suggests that HF evaluation should be performed to influence all hospital procurement decisions when purchasing medical devices, including task analysis to specify user tasks for conducting usability evaluation. Although the suggested inclusion of HF in procurement is not novel, the demonstrated use of HF, and particularly of usability requirements, is still not commonplace. Given the lack of current processes for integrating usability in procurement, it is difficult to judge the validity of the proposed UPF. As mentioned, even systems that are carefully procured using design standards in conjunction with a human factors expert may need to be verified through means such as prototyping and testing with representative users prior to purchase (Ahlstrom & Longo, 2003). If direct access to systems is not possible (e.g., for security purposes), it is advised to request information on the supplier's usability capability, as well as their usability design processes. Another option is to request usability testing from a third-party vendor who is hired to test the product on site. Testing will allow the designer to confirm the positive design

features and identify any problematic design features that may have been unforeseeable as part of the requirements design process.

Ultimately, the question of how to acquire usable systems is theoretically simple: usability requirements need to be included as part of the RFP. However, more investigation is necessary to assess procurement organizations' ability (and willingness) to define usability requirements in a way that affects product selection, bidders' ability to respond to RFPs, and bid evaluators' ability to select the suppliers who are able to deliver truly usable systems.

6 REFERENCES

Ahlstrom, V., & Longo, K. (2003). Human Factors Design Standard (HF-STD-001). Atlantic City International Airport, NJ: Federal Aviation Administration William J. Hughes Technical Center.

Ardito, C., Buono, P., Caivano, D., Costabile, M. F., & Lanzilotti, R. (2013). Investigating and promoting UX practice in industry: An experimental study. International Journal of Human-Computer Studies.

Artman, H. (2002). Procurer Usability Requirements: Negotiations in Contract Development. In NordiCHI 2002, October 19-23.

Artman, H., & Zällh, S. (2005). Finding a way to usability: procurement of a taxi dispatch system. Cognition, Technology & Work, 7(3).

Beevis, D. (1992). Analysis techniques for man-machine design, Vols. 1 & 2. NATO Panel 8, RSG.14, Technical Report AC/243 (Panel 8) TR/7. Brussels: North Atlantic Treaty Organization.

Bevan, N. (1995). Proceedings of the 6th International Conference on Human Computer Interaction, Yokohama, July. Anzai & Ogawa (Eds.), Elsevier.

Bevan, N. (1999). Common Industry Format Usability Tests. In Proceedings of UPA 98, Usability Professionals Association, Scottsdale, Arizona, 29 June - 2 July 1999.

Bevan, N., & Azuma, M. (1997). Quality in use: Incorporating human factors into the software engineering lifecycle. In Software Engineering Standards Symposium and Forum, Emerging International Standards (ISESS 97), Third IEEE International. IEEE.

Blackmon, M. H., Polson, P. G., Muneo, K., & Lewis, C. (2002). Cognitive Walkthrough for the Web. In Proceedings of CHI 2002.

Blanchard, B. S., & Fabrycky, W. J. (1990). Systems engineering and analysis (Vol. 4). Englewood Cliffs, New Jersey: Prentice Hall.

Booher, H. R. (Ed.) (2003). Handbook of Human Systems Integration. John Wiley and Sons Inc., Canada.

Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, New Jersey: Lawrence Erlbaum.

Chin, J. P., Diehl, V. A., & Norman, K. L. (1988, May). Development of an instrument measuring user satisfaction of the human-computer interface. Paper presented at the meeting of the Conference on Human Factors in Computing Systems, Washington, DC.

Chagpar, A., Cafazzo, J., & Easty, T. (2006). Lessons Learned from a Comparative High Fidelity Usability Evaluation of Anesthesia Information Systems. Proceedings of the Human

Factors and Ergonomics Society Annual Meeting, 50(24).

Dean, J. C., & Vigder, M. R. (1997). System implementation using commercial off-the-shelf (COTS) software. NRC Publications Archive. Report Number Track 4 - Technology Adaption.

Defense Acquisition University (2010). Introduction to Defense Acquisition Management (10th ed.). Defense Acquisition University Press, Fort Belvoir, Virginia.

Defense Acquisition University (2012). Glossary of Defense Acquisition Acronyms and Terms (12th ed.).

Department of Defense (2011). Incorporating Test and Evaluation into Department of Defense Acquisition Contracts. Washington, DC.

Department of Defense (2001). Systems Engineering Fundamentals. Systems Management College, Defense Acquisition University Press, Fort Belvoir, Virginia (p. 217).

Defense Science Board (DSB, 2009). Buying Commercial: Gaining the Cost/Schedule Benefits for Defense Systems. Report of the Defense Science Board Task Force on Integrating Commercial Systems into the DoD, Effectively and Efficiently. Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Washington, D.C.

Diaper, D., & Stanton, N. (Eds.) (2003). The handbook of task analysis for human-computer interaction. CRC Press.

Department of Defense (2013). Interim DoD Instruction 5000.02, Operation of the Defense Acquisition System (p. 80).

Geis, T., Dzida, W., & Redtenbacher, W. (2003). Specifying usability requirements and test criteria for interactive systems: Consequences for new releases of software-related standards within the ISO 9241 series - Research report Fb 1010 (p. 60).

Ginsburg, G. (2005). Human factors engineering: a tool for medical device evaluation in hospital procurement decision-making. Journal of Biomedical Informatics, 38(3).

Findlay, D. (2014). Announcing the Defence Procurement Strategy. Economic Club of Canada, Ottawa, Ontario.

Greenley, M., Scipione, A., Brooks, J., Salway, A., Dyck, W., & Shaw, C. (2008). The Development and Validation of a Human Systems Integration (HSI) Program for the Canadian Department of National Defence (DND). Contract report for DRDC by CAE Professional Services (DRDC-CR), Ottawa, Canada.

62 Hackos, J. T. & Redish, J. (1998). User and task analysis for interface design. Toronto, Canada: Wiley and Sons. Healthcare Information and Management Systems Society HIMSS EHR Usability Task Force, v 1.0. (2010). Selecting an EHR for Your Practice: Evaluating Usability (p. 26). Available at Human Factors and Ergonomic Society (HFES) (2008). Human Factors Engineering of Software User Interfaces (ANSI/HFES 200). Washington, DC: American National Standards Hix, D., Hartson, R., Siochi, A., & Ruppert, D. (1994). Customer Responsibility for Ensuring Usability : Requirements on the User Interface Development Process. Journal of Systems, 25, HIMSS EHR Usability Task Force (2010): Hornbæk, K. (2006). Current practice in measuring usability: Challenges to usability studies and research. International Journal of Human-Computer Studies, 64(2), doi: /j.ijhcs Integrated Performance Modelling Environment (IPME, 2009). Alion Science and Technology, Inc. Retrieved from International Standards Organization (1998). Ergonomic requirements for office work with visual display terminals (VDTs) Part 11: Guidance on usability International Standards Organization (1999).ISO-13407:1999 Human-centred design processes for interactive systems. International Standards Organization (2006). ISO/IEC 25062:2006 Software engineering -- Software product Quality Requirements and Evaluation (SQuaRE) -- Common Industry Format (CIF) for usability test reports. International Standards Organization (2010). Ergonomics of human-system interaction 9241, Part 210: Human-centred design for interactive systems. International Standards Organization (2008). ISO/IEC 12207, Systems and software engineering Software life cycle processes International Council on Systems Engineering (INCOSE; 2014). What is Systems Engineering. Website available at Johansson, B., & Lahtinen, M. (2012). Requirement Specification in Government IT Procurement. Procedia Technology, 5, doi: /j.protcy Johansson, B., & Lahtinen, M. (2013). Getting the balance right between functional and nonfunctional requirements : the case of requirement specification in IT procurement. International Journal of Information Systems and Project Management, 1(1), doi: /ijispm March Version 02

63 Jokela, T. (2008). A Two-Level Approach for Determining Measurable Usability Targets. In E. L. Law, N. Bevan, G. Christou, M. Springett, & M. Lárusdóttir (Eds.), Proceedings of the International Workshop on meaningful measures: valid useful user experience measurement (VUUM) (pp ). Reykjavik, Iceland, June 18th 2008: Institute of Research in Informatics of Toulouse (IRIT) - Toulouse, France. Jokela, T. (2010). Determining Usability Requirements into a Call-for-Tenders. A Case Study on the Development of a Healthcare System. In NordiCHI 2010, October (pp ). Jokela, T., Laine, J., & Nieminen, M. (2013). Usability in RFPs: The current practice and outline for the future. In M. Kurosu (Ed.), Human-Computer Interaction, Part II, HCII 2013, LNCS 8005 (pp ). Springer, Berlin Heidelber. Kieras, D. E., & Meyer, D. E. (1997). An overview of the EPIC architecture with cognition and performance with application to human-computer interaction. Human-Computer Interaction, 12, Joshi, A., Sarda, N. L., & Tripathi, S. (2010). Measuring effectiveness of HCI integration in software development processes. Journal of Systems and Software, 83(11), doi: /j.jss Landolt, J. P., & Evans, M. J. R. (2001). An R&D Strategy for the Way Ahead in M&S for the Canadian Air Force. In The Second NATO Modelling and Simulation Conference (Vol. 1). Lauesen, S. (1998). Usability Requirements in a Tender Process. In Proceedings of OZCHI 98, IEEE Computer Society, 1998 Usability (pp. 1 8). Adelaide.Lee, Y., & Kozar, K. a. (2012). Understanding of website usability: Specifying and measuring constructs and their relationships. Decision Support Systems, 52(2), doi: /j.dss Lehtonen, T., Kumpulainen, J., Liukkonen, N., & Jokela, T. (2010). To what extent usability truly matters? A study on usability requirements in call-for-tenders of software systems issued by public authorities. In NordiCHI 2010, October (pp ). Lin, H. X., Choong, Y. Y., & Salvendy, G. (1997). A proposed index of usability: A method for comparing the relative usability of different software systems. Behaviour and Information Technology, 16(4/5), Lund, A. M. (2001). Measuring usability with the USE Questionnaire. STC Newsletter, 8.2. Markensten, E. (2005). Mind the Gap: A Procurement Approach to Integrating User-Centered Design in Contract Development. Stockholm University. Markensten, E. (2003). Procuring Usable Systems-An Analysis of a Commercial Procurement Project. In Proceedings of HCI International, 3, McCracken, J.H. & Aldrich, T.B. (1984). Analysis of selected LHX mission functions: Implications for operator workload and system automation goals. (Technical note ASI March Version 02

64 024-84(b)), Fort Rucker, AL: Anacapa Sciences, Inc.MIL-STD-1472G (2012) Department of Defense Design Criteria Standard: Human Engineering. Military Standard-1472 G (2012). Department of Defense Design Criteria Standard: Human Engineering. MIL-STD-46855A (2011) Department of Defense Standard Practice: Human Engineering Requirements for Military Systems, Equipment, and Facilities. National Research Council (2007). Human-System Integration in the System Development Process: A New Look. Washington, DC: The National Academies Press. Available at National Research Council (2002). Equipping Tomorrow s Military Force: Integration of Commercial and Military Manufacturing in 2010 and Beyond. Available at National Research Council (2008). Pre-Milestone A and Early Phase System Engineering: A Retrospective Review and Benefits for Future Air Force Acquisition. Committee on Pre- Milestone A Systems Engineering: A Retrospective Review and Benefits for Future Air Force Systems Acquisition with the Air Force Studies Board and the Division on Engineering and Physical Sciences. The National Academies Press. Available at National Institute of Standards and Technology (2007). Common Industry Specification for Usability Requirements (CISU-R). Technology Administration, US Department of Commerce, NISTIR North Atlantic Treaty Organization (NATO; 2012). Safe Ride Standards for Casualty Evacuation Using Unmanned Aerial Vehicles Task Group HFM-184 ( ), C/323(HFM-184), TP/475, STO-TR-HFM-184. Available at: C.pdf. Nielsen, J. (1989). Coordinating user interfaces for consistency. Boston, MA: Academic Press. Nielsen, J. (1992). Finding usability problems through heuristic evaluation. Proceedings ACM CHI'92 Conference (Monterey, CA, May 3-7), Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., and Mack, R.L. (Eds.), Usability Inspection Methods. John Wiley & Sons, New York, NY. Nielsen (1995). 10 usability Heuristics for User Interface Design, retrieved from Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces, Proc. ACM CHI'90 Conf. (Seattle, WA, 1-5 April), March Version 02

65 Nuclear Regulatory Commission (2002). Human-System Interface Design Review Guidelines - NUREG-0700 Rev. 2., Nuclear Regulatory Commission, Washington, DC. Prumper, J. & Hurtienne, J. (2007). Usability is easy to use: Some background on standards and processes. Paper presented at the 2007 International Research Workshop on Userdriven IT Design and Quality Insurance (UITQ). Retrieved from Public Works and Government Services Canada (PWGSC; 2012). Policy and Guidelines Supply Manuals. Retrieved from Research & Technology Organization/North Atlantic Treaty Organization (RTO-NATO; 2001). The Second NATO Modelling and Simulation Conference. In the NATO Modelling and Simulation Group (NMSG) Conference held in Shrivenham, UK, October (RTO- MP-071 ed., Vol. 323, pp ). Saffer, D. (2009). Designing for interaction: Creating innovative applications and devices. New Riders. Sauro, J. & Kirkland, E. (2005, April). A method to standardize usability metrics into a single score. Paper presented at the Conference for Computer Human Interaction, Portland, Oregon. Scholtz, J. (2004). Usability evaluation. National Institute of Standards and Technology. Stanton, N. A. (2006). Hierarchical task analysis: Developments, applications, and extensions. Applied ergonomics, 37(1), Stone, J.C. (2012). A Separate Defence Procurement Agency: Will it Actually Make a Difference? Strategic Studies Working Group Papers: Canadian Defence & Foreign Affairs Institute, (February), System Development Life Cycle (n.d.) Available at _e1_plan.htm. Theofanos, M., Stanton, B., & Bevan, N. (2006). A Practical Guide to the CIF: Usability Measurements. Waits & Measures Special Section, November &, Rail Safety & Standards Board (RSSP, 2008). Understanding Human Factors: a guide for the railway industry. Understanding Human Factors/June 08. UsabilityNet (2003). A project funded by the European Union to promote usability and usercentered design. Retrieved from Accessed March 26, March Version 02

66 Wharton, C., Bradford, J., Jeffries, J., & Franzke, M. (1992, May). Applying cognitive walkthroughs to more complex user interfaces: Experiences, issues and recommendations. Paper presented at the meeting of Human Factors in Computing Systems, Monterey, California. 31 March Version 02

APPENDIX A STANDARDS

A.1 Design Specification Standards

Product design standards provide detailed guidance on the design elements of a product and are described below.

A.1.1 ISO 9241: Ergonomic requirements for office work with visual display terminals

The ISO 9241 documents contain 17 parts that make up the complete set of standards. Each part addresses usability requirements as they relate to a different area of the product. The usability standards dealing with hardware are parts 3, 4, 7, 8, and 9; the software usability standards are parts 10 to 17; and the standards dealing with the ergonomics of the context of use are parts 2, 5, and 6.

ISO 9241-11 Guidance on Usability (1998) provides the main definition of usability that is used in conjunction with the other related ergonomic standards. This standard specifies that user performance can be measured by how effectively the system allows users to achieve their intended goals and by the resources (e.g., time, money, cognitive effort) required to achieve those goals. ISO 9241-11 does not provide explicit measures of usability, but it explains how to identify the information that is necessary when specifying or evaluating usability in terms of measures of user performance and satisfaction. It includes an explanation of how the usability of a product can be specified and evaluated as part of a quality system, and of how measures of user performance and satisfaction can be used to assess how any component of a work system affects the quality of the whole work system in use.

A potential drawback to the use of the ISO 9241 design standards is that the above organizational structure may not inherently depict a clear structure for deriving usability requirements. Geis (2003) recommends a generic-to-detailed approach (Figure A-1) in which the most general standard defines the quality concept of usability (ISO 9241-11). Principles for the design of user interfaces are given in two standards: one devoted to the dynamic aspect of interactive system usage, referred to as dialogue principles (ISO 9241-10), and one devoted to the static aspect, referred to as information design principles (ISO 9241-12). Standards devoted to specific dialogue techniques (ISO 9241 Parts 14-17) provide recommendations that should be considered.

Figure A-1: Software Usability Standards (ISO 9241) Structured from Generic to Specific (Geis et al., 2003).
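ISO 9241-11 leaves the choice of concrete metrics to the practitioner. As a purely illustrative sketch of the three measures discussed above, the following Python fragment summarizes effectiveness, efficiency, and satisfaction from raw test data; the function name, the data shapes, and the time-based efficiency proxy are assumptions made for illustration, not part of the standard.

```python
from statistics import mean

def usability_measures(sessions, satisfaction_scores):
    """Summarize ISO 9241-11-style measures from raw test data.

    sessions: list of (task_completed: bool, time_on_task_s: float) tuples.
    satisfaction_scores: list of per-user questionnaire ratings (e.g., 1-7).
    """
    # Effectiveness: proportion of task attempts completed successfully.
    effectiveness = sum(done for done, _ in sessions) / len(sessions)

    # Efficiency: effectiveness relative to the resources expended, here
    # approximated by the mean time on task for successful attempts.
    successful_times = [t for done, t in sessions if done]
    mean_time = mean(successful_times) if successful_times else float("inf")
    efficiency = effectiveness / mean_time  # completed goals per second

    # Satisfaction: central tendency of post-test questionnaire ratings.
    satisfaction = mean(satisfaction_scores)

    return {"effectiveness": effectiveness,
            "efficiency_per_s": efficiency,
            "satisfaction": satisfaction}

# Example: eight attempts at one scenario, plus per-user questionnaire ratings.
sessions = [(True, 95.0), (True, 120.0), (False, 300.0), (True, 88.0),
            (True, 132.0), (True, 101.0), (False, 300.0), (True, 97.0)]
print(usability_measures(sessions, [5, 6, 4, 6, 5, 6, 3, 5]))
```

A procurer would normally report the three measures separately rather than combining them, since a product can score well on one and poorly on another.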

A.1.2 Military Standard-1472 G: 2012 Human Engineering

Military Standard 1472G (MIL-STD-1472G, 2012) provides detailed human engineering design criteria for all military systems, subsystems, equipment, workspaces, and facilities. The purpose of this standard is to provide human design criteria, principles, and practices to ensure that the human is considered and integrated into any military system, subsystem, or workspace. Although the standard was developed for military applications, it can be used for non-military applications as well. MIL-STD-1472G can be used in concert with MIL-STD-46855A to provide detailed human design criteria for areas such as noise limits, night vision imaging systems, or war-fighting symbology. MIL-STD-1472G can be cited as a usability requirement in an RFP, and it can also be used to assess existing software.

A.1.3 ANSI/HFES 200 Human Factors Engineering of Software User Interfaces

The American National Standards/Human Factors Engineering of Software (ANSI/HFES) 200 (2008) standard provides design requirements and recommendations to increase the accessibility, learnability, and ease of use of software. Applying this standard is intended to produce user interfaces that are more usable, accessible, and consistent, and that enable greater productivity and satisfaction. The standard is available for purchase and consists of five parts:

HFES 200 Part 1: Introduction provides an overview of the content, explains relationships among the individual parts, and provides guidance on the relevance of individual parts to the development process so that designers may understand where and when to use each part.

HFES 200 Part 2: Accessibility provides recommendations on features and functions of computer operating systems, drivers, application services, other software layers on which applications depend, and applications themselves that increase the accessibility of applications for users with disabilities.

HFES 200 Part 3: Interaction Techniques incorporates material from ISO 9241 Parts 13 through 17 and is compatible with those ISO standards.

HFES 200 Part 4: Interactive Voice Response consists of completely new material that has not appeared in the ISO 9241 standards.

HFES 200 Part 5: Visual Presentation and Use of Color incorporates material from ISO 9241 Part 12 and includes new recommendations on the use of color.

A.1.4 Nielsen's Heuristics

The following is the list of ten heuristics that are used to identify usability issues of software applications (Nielsen and Molich, 1990; Nielsen, 1992, 1994); a small bookkeeping sketch for pooling evaluators' findings follows the list.

1. Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within a reasonable time.
2. Match between system and the real world: The system should speak the users' language, with words, phrases, and concepts familiar to the user rather than system-oriented terms.
3. User control and freedom: Users often choose system functions by mistake and need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue.
4. Consistency and standards: Applications require consistency within their features, including terminology, layout, color, and behavior.
5. Error prevention: Even better than a good error message is careful design that prevents a problem from occurring in the first place.
6. Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible.
7. Flexibility and efficiency of use: The system should allow users to tailor frequent actions.
8. Aesthetic and minimalist design: Dialogues should not contain information that is irrelevant or rarely needed; every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
9. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
10. Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation.
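In a heuristic evaluation, several evaluators log each problem against one of the heuristics and rate its severity, and the findings are then pooled (Nielsen, 1992). The sketch below shows one possible way to keep those books, assuming Nielsen's commonly used 0-4 severity scale; the record layout and function names are illustrative only.

```python
from collections import defaultdict

# Nielsen's ten heuristics, keyed by number (labels abridged).
HEURISTICS = {
    1: "Visibility of system status",
    2: "Match between system and the real world",
    3: "User control and freedom",
    4: "Consistency and standards",
    5: "Error prevention",
    6: "Recognition rather than recall",
    7: "Flexibility and efficiency of use",
    8: "Aesthetic and minimalist design",
    9: "Help users recognize, diagnose, and recover from errors",
    10: "Help and documentation",
}

# Each finding: (evaluator, heuristic number, severity 0-4, description).
findings = [
    ("eval-1", 1, 3, "No progress indicator during long search"),
    ("eval-2", 1, 2, "No progress indicator during long search"),
    ("eval-2", 9, 4, "Error dialog shows a raw exception code"),
    ("eval-3", 5, 3, "Date field accepts malformed input"),
]

def summarize(findings):
    """Group findings by heuristic and average the severity ratings."""
    by_heuristic = defaultdict(list)
    for _, number, severity, _desc in findings:
        by_heuristic[number].append(severity)
    for number, severities in sorted(by_heuristic.items()):
        avg = sum(severities) / len(severities)
        print(f"{HEURISTICS[number]}: {len(severities)} finding(s), "
              f"mean severity {avg:.1f}")

summarize(findings)
```

Pooling across evaluators matters because, as Nielsen's studies found, no single evaluator catches more than a fraction of the usability problems.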

A.2 Design Process Standards

Design process standards describe general approaches and specific methods used to develop the product. The following are some examples of commonly used design process standards.

A.2.1 Military-Standard 46855A

MIL-STD-46855A (2011) is the primary tasking document used by the US defense services to specify human engineering efforts during system acquisition. It supports the human factors engineering discipline independently or as part of Human Systems Integration initiatives. MIL-STD-46855A is written to accommodate a wide range of products, from small equipment items to major systems, and intentionally provides reasonable latitude for performing organizations to apply technical and program judgment and innovation consistent with specific procurements.

MIL-STD-46855A provides general and detailed requirements for implementing human factors engineering techniques and programs in system design. The general requirements outline the scope and nature of the work and specify that, during system development, an iterative process of analysis, design and development, and test and evaluation be employed. This process requires input from end-users to ensure that usability requirements are implemented during system development. The detailed requirements build on this iterative process; for example, guidelines are provided for conducting task analyses, workload studies, experiments, and tests, and for developing mock-ups of the system. During design and development of a system, MIL-STD-46855A specifies that the human engineering inputs and the results of human engineering analyses shall be converted into detailed engineering design features. The design of the equipment shall satisfy human-system performance requirements and meet the applicable criteria of MIL-STD-1472G and any other human engineering criteria (e.g., the ISO 9241 series) specified by the contract.

A.2.2 ISO 9241-210:2010 Ergonomics of human-system interaction -- Part 210: Human-Centred Design for Interactive Systems

ISO 9241-210:2010 provides requirements and recommendations for human-centred design principles and activities throughout the life cycle of computer-based interactive systems. It is intended to be used by those managing design processes, and is concerned with the ways in which both hardware and software components of interactive systems can enhance human-system interaction. Note: ISO 9241-210:2010 is preceded by ISO 13407:1999.

The standard describes user-centred design (UCD) as an iterative process consisting of five steps, depicted in Figure A-2, and states key principles including the following:

- The active involvement of users and a clear understanding of user and task requirements.

Figure A-2: The iterative human-centred design process (ISO 9241-210:2010).
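The five activities and their loop structure come from the standard; the control-flow sketch below is only a schematic rendering of that loop. All function names and the toy stubs, including the rule that evaluation passes on the third iteration, are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class EvaluationResult:
    meets_requirements: bool
    findings: list

# --- Toy stand-ins for real project activities -------------------------
def understand_context_of_use():
    return {"users": "dispatch operators", "environment": "operations room"}

def specify_user_requirements(context):
    return ["a novice operator completes 'assign unit' in under 2 minutes"]

def produce_design_solutions(requirements, iteration):
    return {"prototype": f"v{iteration}"}

def evaluate_against_requirements(design, requirements, iteration):
    # A real project would run user-based evaluation here; this stub
    # simply "passes" on the third design iteration.
    return EvaluationResult(meets_requirements=(iteration >= 3), findings=[])

# --- The iterative loop at the heart of the HCD process ----------------
def human_centred_design(max_iterations=5):
    for iteration in range(1, max_iterations + 1):
        context = understand_context_of_use()
        requirements = specify_user_requirements(context)
        design = produce_design_solutions(requirements, iteration)
        result = evaluate_against_requirements(design, requirements, iteration)
        print(f"iteration {iteration}: meets requirements = "
              f"{result.meets_requirements}")
        if result.meets_requirements:
            return design  # the design now satisfies the user requirements
        # Otherwise the evaluation findings feed back into earlier activities.
    raise RuntimeError("requirements not met within the planned iterations")

human_centred_design()
```

The key point the loop makes explicit is that evaluation against user requirements, not the delivery schedule, is what terminates design iteration.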

A.3 Proprietary Standards

Proprietary standards are used for developing standard user interfaces. An example of a proprietary standard is the User Experience Interaction Guidelines for Windows. Proprietary standards or guidelines differ from the ISO 9241 standards in that, for example, the Windows style guide provides detailed guidelines (e.g., font colour and size) for developing user interfaces. These guidelines can be used as a requirement in the RFP for a new system or software, or they can be used to assess the usability of software or systems that are partially or completely developed.

A.4 Reference Guides

Reference guides provide informed guidance on which standards to use, and why, for both COTS and custom products, but are tailored specifically to a given domain (i.e., a standard of standards). Reference guides are often developed for a specific domain (e.g., the Rail Safety & Standards Board's Understanding Human Factors, 2008), but may or may not be mandated. This type of standard may be a useful starting point for procurement organizations seeking to understand how various standards can be combined when procuring software systems. Although these standards are presented generally in order to apply to a wide range of systems and equipment, they can be turned into system-specific rules. Not all of the standards proposed here may be applicable to every system; for any particular system, some of the standards will be relevant and some will not. Additionally, the use of design standards cannot substitute for knowledge of task (user and system) requirements: the user must possess (or obtain) detailed knowledge of user and system needs (Ahlstrom & Longo, 2003).

A.4.1 Human Factors Design Standard (HFDS) for Acquisition of Commercial-off-the-Shelf (COTS) Subsystems, Non-Developmental Items (NDI) and Developmental Systems

The HFDS was developed by human factors specialists within the Federal Aviation Administration (FAA) and provides human factors design and evaluation guidelines derived from a number of research projects and from international and military standards. The guidelines can be used for procuring COTS software, non-developmental items, and new developmental systems within the FAA. The standard contains 15 chapters and covers human factors engineering criteria such as general design requirements and the design and development of displays, alarms, controls, and automation for the aviation domain. The selected guidelines were developed using information from the Department of Defense, the National Aeronautics and Space Administration, and the Department of Energy.

A.4.2 Nuclear Regulatory Commission (NUREG) 0700 Human-System Interface Design Review Guidelines

NUREG-0700 Human-System Interface Design Review Guidelines provides a collection of detailed human factors engineering guidelines and criteria with respect to both the physical and functional characteristics of interfaces within the nuclear energy domain. The standard provides guidance for designing and developing all systems in a nuclear plant, from the main control room to the systems within the plant itself. As such, the guidelines can also be used for designing other systems, such as military command and control systems or communications centres.

A.4.3 Common Industry Specification for Usability Requirements (CISU-R)

The Common Industry Specification for Usability - Requirements (CISU-R; NISTIR, 2007) sets standards for specifying usability requirements using the ISO definition of usability: the effectiveness, efficiency, and satisfaction with which the intended users can achieve their tasks in the intended context of product use. Usability requirements created using the CISU-R are consistent with the process recommendations of ISO 13407. Usability tests conducted to measure whether usability requirements have been met can be reported using the Common Industry Format (CIF) (ISO/IEC 25062).

The CISU-R facilitates the iterative development of requirements. The three parts of a requirement can be documented with increasing precision as more information is gathered about the key user groups and their needs (a data-structure sketch follows at the end of this section):

- First, the expected context-of-use is specified. This provides the information on the users, the environment, and the intended use of the product that forms the basis for the usability requirements, and it can be communicated to designers, developers, or customers for review and confirmation.
- Next, performance and satisfaction criteria can be specified for defined scenarios of use. These criteria are used as input to the design process and to create testable measures that validate whether a product meets the requirements.
- Finally, the procedure for a usability test of the requirements can be specified.

After each iterative round of discussion and revision, the stakeholders can approve the usability requirements for the product. These requirements then form the basis for continued product development and testing.

The CISU-R provides a structure that allows stakeholders to:

- Document the context-of-use for a product, including definitions of the expected technical, physical, and social environments, user groups, goals for use of the product, and scenarios of use.
- Write usability requirements in sufficient detail to make an effective contribution to design and development.
- Relate usability requirements to stakeholder requirements (including user, customer, and business requirements) for successful use of a product and increased productivity.
- Define usability criteria that can be empirically validated.
- Define the method for testing the product against the criteria.
- Create requirements that are useful throughout the product design and development process, providing input to the design process early in a project and adding more detailed information about criteria and methods as it becomes available.
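As a concrete, and entirely hypothetical, rendering of the three-part CISU-R structure, the sketch below captures one requirement for an imagined healthcare system. The class and field names, the example targets, and the test-procedure text are our assumptions, not prescribed by the specification.

```python
from dataclasses import dataclass, field

@dataclass
class ContextOfUse:
    user_group: str
    environment: str
    goal: str
    scenario: str

@dataclass
class Criterion:
    measure: str   # e.g., "unassisted task completion"
    target: str    # e.g., ">= 90% of participants"

@dataclass
class UsabilityRequirement:
    """Three-part structure loosely following the CISU-R outline."""
    context: ContextOfUse
    criteria: list = field(default_factory=list)
    test_procedure: str = "TBD"  # typically refined in later iterations

requirement = UsabilityRequirement(
    context=ContextOfUse(
        user_group="triage nurses with no prior experience of the system",
        environment="emergency department registration workstation",
        goal="register an incoming patient",
        scenario="walk-in patient who has no existing record"),
    criteria=[
        Criterion("unassisted task completion", ">= 90% of participants"),
        Criterion("mean time on task", "<= 3 minutes"),
        Criterion("post-test satisfaction rating", ">= 5 on a 7-point scale"),
    ],
    test_procedure="moderated test with 8 participants, reported in CIF format")

print(requirement.context.goal, "->",
      [c.target for c in requirement.criteria])
```

Because each part can start coarse and be refined later, a structure like this supports the iterative approval rounds the CISU-R describes.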

A.4.4 ISO/IEC 25062:2006: Reporting Results

ISO/IEC 25062:2006, Software engineering -- Software product Quality Requirements and Evaluation (SQuaRE) -- Common Industry Format (CIF) for usability test reports, provides a standard method for reporting usability test findings and can be purchased from the Standards Council of Canada. The standard is based on a collection of other international standards, including parts of the ISO 9241 series. The format is designed for reporting the results of formal usability tests in which quantitative measurements were collected, and it is particularly appropriate for summative/comparative testing. The CIF does not indicate how to perform a usability test; it provides guidance on how to report the results of one.

The CIF targets two audiences: usability professionals and stakeholders in an organization. Stakeholders can use the usability data to help make informed decisions concerning the release of software products or the procurement of such products. The format includes the following elements: a description of the product, the goals of the test, the test participants, the tasks the users were asked to perform, the experimental design of the test, the method or process by which the test was conducted, the usability measures and data collection methods, and the numerical results.

While the CIF (ISO/IEC 25062:2006) was developed as a standard best practice for summative usability testing, many usability professionals also use the CIF when reporting formative testing. From this community, the common elements for formative reporting were extracted and put into a website, in the hope that usability professionals could go to the site and use the elements to develop their own formative test reports. The open-access website shown in Figure A-3 contains detailed examples of the recommended elements for reporting formative usability testing: everything needed for a complete report, from title-page structure, to methods, to how to present screenshot-related data.

Figure A-3: Open-access website providing recommended elements for formative usability test reports.
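To tie the report elements together, here is one way the CIF sections listed above might be held as a simple data structure for review before the full report is written. The field names paraphrase the listed elements, and every value is an invented placeholder rather than real test data.

```python
# Ordered skeleton of the CIF report elements; keys paraphrase the
# ISO/IEC 25062 elements, values are illustrative placeholders only.
cif_report = {
    "product_description": "Dispatch client v2.1, summative test",
    "test_goals": "verify that call-taking tasks meet usability targets",
    "participants": "12 dispatchers, 3 to 10 years of experience",
    "tasks": ["log a call", "assign nearest unit", "close incident"],
    "experimental_design": "within-subjects, counterbalanced task order",
    "method": "moderated lab sessions, think-aloud disabled (summative)",
    "usability_measures": ["completion rate", "time on task",
                           "satisfaction questionnaire"],
    "numerical_results": {"completion_rate": 0.92, "mean_time_s": 147.0},
}

def render(report):
    """Print the report sections in CIF order for a quick completeness check."""
    for section, content in report.items():
        print(f"{section.replace('_', ' ').title()}: {content}")

render(cif_report)
```

A checklist of this kind is useful in procurement because a bidder's CIF-formatted results can be compared section by section against the usability requirements stated in the RFP.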


More information

10 Considerations for a Cloud Procurement. March 2017

10 Considerations for a Cloud Procurement. March 2017 10 Considerations for a Cloud Procurement March 2017 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved. Notices This document is provided for informational purposes only. It represents

More information

DESIGN AND TECHNOLOGY

DESIGN AND TECHNOLOGY Qualification Accredited A LEVEL NEA Marking Criteria April 2017 DESIGN AND TECHNOLOGY H404, H405 and H406 For first teaching in 2017 www.ocr.org.uk/gcsedesignandtechnology A Level Design and Technology

More information

American Association for Laboratory Accreditation

American Association for Laboratory Accreditation R311 - Specific Requirements: Federal Risk and Authorization Management Program Page 1 of 10 R311 - Specific Requirements: Federal Risk and Authorization Management Program 2017 by A2LA. All rights reserved.

More information

2 ACCREDITED AUDITORS

2 ACCREDITED AUDITORS 2 ACCREDITED AUDITORS 2.1 Auditor Accreditation 2.1.1 IBAC will issue auditor accreditation and appropriate credentials to individuals that apply for such accreditation and who meet the requirements established

More information

Typical Training Duration 11 months

Typical Training Duration 11 months New Zealand Certificate in Business (Administration and Technology) (Level 3) This programme is ideal for learners who need to gain a good general understanding of business administration and technology.

More information

STAFF REPORT. January 26, Audit Committee. Information Security Framework. Purpose:

STAFF REPORT. January 26, Audit Committee. Information Security Framework. Purpose: STAFF REPORT January 26, 2001 To: From: Subject: Audit Committee City Auditor Information Security Framework Purpose: To review the adequacy of the Information Security Framework governing the security

More information

The Great TOGAF Scavenger Hunt. Enterprise Architecture Using TOGAF 9 Course Preparation Guide

The Great TOGAF Scavenger Hunt. Enterprise Architecture Using TOGAF 9 Course Preparation Guide Enterprise Architecture Using TOGAF 9 Course Preparation Guide 2011 Metaplexity Associates LLC All Rights Reserved Version 2.0 January 2, 2011 The Open Group Certification Mark logo and TOGAF are trademarks,

More information

Chapter 4. EDGE Approval Protocol for Auditors

Chapter 4. EDGE Approval Protocol for Auditors Chapter 4 EDGE Approval Protocol for Auditors Version 2.01 June 2016 Copyright 2015 International Finance Corporation. All rights reserved. The material in this publication is copyrighted by International

More information

GUIDELINE. of the European Committee for Welding of Railway Vehicles (ECWRV) ( ) PART 1

GUIDELINE. of the European Committee for Welding of Railway Vehicles (ECWRV) ( ) PART 1 GUIDELINE of the European Committee for Welding of Railway Vehicles (ECWRV) (2016-05-10) PART 1 Procedure for the application of EN 15085 and certification of welding manufacturers for welding railway

More information

Conceptual Model Architecture and Services

Conceptual Model Architecture and Services CAN UNCLASSIFIED Conceptual Model Architecture and Services Contribution to the National Science Foundation Report on Research Challenges in Modeling and Simulation for Engineering Complex Systems Nathalie

More information