
Web Accessibility Benchmarking Cluster
Sixth Framework Programme, Information Society Technologies Priority

D-WAB4 Unified Web Evaluation Methodology (UWEM 1.0)

Contractual Date of Delivery to the EC: 05 July 2006
Actual Date of Delivery to the EC: 05 July 2006
Editors: Eric Velleman (Accessibility Foundation), Carlos A. Velasco (Fraunhofer Institute for Applied Information Technology FIT), Mikael Snaprud (Agder University College), Dominique Burger (BrailleNet)
Contributors: See Appendix D for the contributors list
Workpackage: WAB1a
Security: Public
Nature: Report
Version: G
Total number of pages: 140
Keywords: Web Accessibility, World Wide Web Consortium, W3C, Web Accessibility Initiative, WAI, Web Content Accessibility Guidelines, WCAG, Unified Web site Evaluation Methodology, UWEM, Evaluation and Report Language, EARL.

Please see Appendix A for the document license.

DOCUMENT HISTORY

Version A (Eric Velleman): Initial version based upon the version 0.5 review, comments and testing. Changes made in all sections to shift to 1.0.
Version B (Nils Ulltveit-Moe, Agata Sawicka, Eric Velleman): Added the error margin of the 95% confidence interval as a sampling quality measure, and indicated algorithms for random sampling. Changed scorecards according to Agata's suggestions. EV changed the appendix template report.
Version C (Annika Nietzio): Updated version of the section on aggregation.
Version D (Eric Velleman): Changes throughout the checkpoints following review comments and work on scope and sampling.
Version E (Eric Velleman): Added references provided by Annika Nietzio in the aggregation section.
Version F (Carlos A. Velasco): Overall copy-editing.
Version G (Eric Velleman): Updated Section 5, expected results and editorial changes thanks to review comments by Christoph, Annika and Jesse.

Table of Contents

1 Executive summary
2 Introduction
  2.1 Methodology definition
  2.2 Target audience of the document
  2.3 Target technologies of this document
  2.4 Expertise for evaluating Accessibility
  2.5 Acknowledgements
  2.6 More information about the WAB Cluster
3 Evaluation procedures and conformance
  3.1 Types of evaluation
  3.2 Conformance claims
  3.3 Tool conformance
4 Scope and Sampling of resources
  4.1 Definitions
  4.2 Procedure to express the scope
  4.3 Procedure to generate evaluation samples (The Core Resource List; The Sampled Resource List; Manual sample size)
5 Tests for conformance evaluation
  5.1 Introduction
  5.2 to 5.15 Guidelines 1 to 14, each with its checkpoints and the corresponding (X)HTML tests, CSS tests and tests for external objects
6 Aggregation model for test results
  6.1 Introduction
  6.2 Approach
  6.3 Definitions and mathematical background (definitions; notation; example on confidence weighting; mathematical background: the UCAB model, accessibility barrier probabilities Fp and Fs; limitations of the UCAB model)
7 Reporting of test results
8 Scorecard report (scores indicating barrier status; scores indicating barrier change; aggregation of statistics)
9 Glossary
10 References
Appendix A: Document License
Appendix B: Template for expert evaluation (introduction; executive summary; background about the evaluation; Web site evaluated; reviewer(s); evaluation process; resources list; check results; references; appendices)
Appendix C: RDF Schemas for Resources Lists (expressing resources; resources lists)
Appendix D: Contributors to this document
Appendix E: W3C Document License

List of Figures
Figure 1: UWEM Evaluation Types
Figure 2: The first and second stage of the Web site evaluation process
Figure 3: The third stage of the Web site evaluation process

List of Tables
Table 1: UWEM barrier score represented by letter and colour to indicate barrier probability. Note that the levels chosen are preliminary values that will be evaluated when real data measurements from the EIAO project are available.
Table 2: UWEM barrier change score represented by a symbol indicating change in barrier probability.

1 Executive summary

This document is the result of a joint effort by 23 European organizations participating in three European projects, combined in a cluster to develop a Unified Web Evaluation Methodology (UWEM). It describes a methodology for evaluating conformance of Web sites with the W3C Web Content Accessibility Guidelines 1.0 [WCAG10]. The next version of UWEM will be synchronized with the foreseen migration from WCAG 1.0 to WCAG 2.0 [WCAG20]. We hope that this version and the work inside the WAB Cluster can serve as input to the WCAG 2.0 work that is done by the World Wide Web Consortium as part of its Web Accessibility Initiative.

The projects involved in making this methodology aim to ensure that evaluation tools and methods developed for global monitoring or for local evaluation are compatible and coherent among themselves and with WAI. Their aim is to increase the value of evaluations by basing them on a shared interpretation of WCAG 1.0. Therefore, UWEM offers a harmonized interpretation of the WCAG 1.0 guidelines agreed among the participants within the three aforementioned projects.

The UWEM provides an evaluation procedure consisting of a system of principles and practices for expert and automatic evaluation of Web accessibility for human and machine interfaces. The methodology aims to be fully conformant with the WCAG 1.0 guidelines. Currently the UWEM is limited to priority 1 and priority 2 guidelines and combines tool and expert evaluation. The methodology is suitable for detailed evaluations of one Web page, an entire site (irrespective of size), or multiple sites. It covers sampling, clarifications of the checkpoints, and the information necessary for reporting, interpretation and integration/aggregation of results. The UWEM will be used by the projects included in the Cluster for an observatory (EIAO project), tools for benchmarking (BenToWeb project) and a certification scheme (SupportEAM project). More information about the Cluster, the UWEM and the projects involved can be found on the WAB Cluster web site.

The document is organized in the following way: Section 2 outlines the requirements, the target audience and the target technologies for the document, and serves as a description of the basic properties of UWEM. Section 3 describes the UWEM evaluation procedures and conformance. Section 4 describes the approaches for scoping and sampling to provide resources lists for conformance evaluation of a Web site. The procedures are designed to ensure that the identified samples are representative so that the evaluation results are reliable. Section 5, on conformance testing, describes how to carry out tool and expert tests for conformance evaluation of WCAG 1.0 checkpoints for priority 1 and 2. Section 6 describes a model for aggregating test results from the procedures described in section 5.

The model also supports estimation of accessibility barriers within a Web site as well as across groups of Web sites. Section 7 describes the reporting of test results with EARL and also proposes a template for reporting expert evaluation results. Section 8 describes a reporting method specially tailored for policy makers, using a scorecard approach. Sections 9 and 10 provide a glossary of terms and a list of the references used in the document. Appendix A provides the document license for UWEM. Appendix B provides a template to report evaluation results tailored to expert evaluation using UWEM. Appendix C provides the RDF schemas. Appendix D contains a list of contributors to this document. Appendix E provides the W3C document license for the referenced W3C work in this UWEM document.

2 Introduction

The Unified Web Evaluation Methodology should ensure that evaluation tools and methods developed for large scale monitoring or for local evaluation are compatible and coherent among themselves and with W3C/WAI. This document is the result of a joint effort of three European projects, with 23 organisations collaborating in the WAB Cluster to develop UWEM. The UWEM describes a methodology for evaluation of conformance with the W3C Web Content Accessibility Guidelines 1.0 [WCAG10].

For practical reasons, this version of the methodology focuses on the current WCAG 1.0 guidelines. The WCAG 1.0 guidelines are broadly accepted and have been a stable factor in accessibility since their publication in May 1999. Already in 2002, the EU recommended that they should be adopted by the public sector in Member States. In some countries they have found their way into legislation. In a later version, UWEM will be synchronized with the foreseen migration from WCAG 1.0 to WCAG 2.0 [WCAG20].

The purpose of the UWEM is to provide a basis for evaluating the methodology with regard to the intended types of testing: expert and automatic evaluation of Web resources. (Evaluation of Web resources via user testing will be covered by future UWEM releases.) The evaluation of the UWEM is also planned to provide feedback and contribute to W3C/WAI for future guidelines or versions of guidelines. W3C/WAI staff have reviewed and provided input into previous drafts of this document in order to minimize potential fragmentation of technical content. This does not imply W3C or WAI endorsement of any part of this document in any way.

Part of the material presented in section 5 of this document consists of annotations of W3C documents. In particular, we are targeting the following two documents: Web Content Accessibility Guidelines 1.0 [WCAG10], Techniques for Web Content Accessibility Guidelines 1.0 [WCAG10-TECHS], and other Techniques documents linked from the latter. According to the Intellectual Rights FAQ from W3C, section 5 of the UWEM falls under an "annotation... that does not require the copying and modification of the document being annotated". Therefore, all references to guidelines and checkpoints are duly quoted, and the URL to the original document is included. W3C is not responsible for any content not found at the original URL, and the annotations in this document are non-normative.

2.1 Methodology definition

The Unified Web Evaluation Methodology provides an evaluation procedure offering a system of principles and practices for expert and automatic evaluation of Web accessibility for human and machine interfaces. The Methodology is designed to be conformant with WCAG 1.0 priority 1 and 2 checkpoints with regard to technical criteria. The UWEM aims to increase the value of evaluations by basing them on a shared interpretation of WCAG 1.0 and a set of tests that are sufficiently robust to give stakeholders confidence in the results. Web content producers may also wish to evaluate their own content, and UWEM aims to be suitable for these users as well.

The methodology is designed to meet the following requirements:
- Technical conformance to existing Web Accessibility Initiative (WAI) Recommendations and Techniques documents.
- Tool and browser independence: questions and tests are given in a 'pure' form, making them as tool independent as possible.
- Unique interpretation: questions shall have only one way of being interpreted.
- Replicability: different Web accessibility evaluators who perform the same tests on the same site should get the same results within a given tolerance.
- Translatability: the methodology will address localisation issues.
- Compliance with Regulation (EC) No 808/2004 of the European Parliament and of the Council of 21 April 2004 concerning Community statistics on the information society.

In the methodology we have included information about:
- Scope and sampling.
- Reporting, interpretation and integration/aggregation of results.

2.2 Target audience of the document

The target audiences for this document include, but are not limited to:
- Web accessibility benchmarking projects (European Commission, national governments, disability groups, EDeAN, and research institutions)
- Possible Certification Authorities

- Web content producers wishing to evaluate their content
- Developers of Evaluation and Repair Tools
- Policy makers and Web site owners
- Other organisations evaluating Web sites

The European Commission, national governments and other organisations who wish to benchmark Web accessibility will be able to use the UWEM to carry out the evaluations and compare their results in a meaningful way. The UWEM is an evaluation methodology and is not intended to provide information for Web content producers wishing to produce content conformant with WCAG 1.0. This information is provided in the WCAG 1.0 Techniques documents that are available through the W3C/WAI web site [WCAG10-TECHS].

2.3 Target technologies of this document

The UWEM primarily covers methods to evaluate documents based on the following technologies:
- HTML 4.01
- XHTML 1.0 and 1.1
- CSS 2.x
- Other embedded objects in (X)HTML resources

2.4 Expertise for evaluating Accessibility

The W3C evaluation suite extensively describes the expertise necessary for the evaluation of the accessibility of Web content for people with disabilities. Evaluation activities require diverse kinds of expertise and perspectives. Individuals evaluating the accessibility of Web content require training and experience across a broad range of disciplines. A collaborative approach can bring better results for individuals evaluating Web content. The W3C/WAI evaluation suite states:

"Effective evaluation of Web accessibility requires more than simply running an evaluation tool on a Web site. Comprehensive and effective evaluations require evaluators with an understanding of Web technologies, evaluation tools, barriers that people with disabilities experience, assistive technologies and approaches that people with disabilities use, and accessibility guidelines and techniques."

This document describes the expert and automatic evaluation methodology. Automatic evaluation can significantly reduce the time and effort needed for an evaluation, but note that many accessibility checks require human judgement and must be evaluated manually. This is described in more detail in the tests of section 5. More information on using tools can be found in the "Selecting tools" section of the W3C/WAI evaluation suite.

2.5 Acknowledgements

The following organizations worked on this UWEM document: Accessibility Foundation (The Netherlands, Cluster coordinator); Agder University College (Norway, EIAO coordinator); Fraunhofer Institute for Applied Information Technology FIT (Germany, BenToWeb coordinator); Association BrailleNet (France, SupportEAM coordinator); Vista Utredning AS (Norway); Forschungsinstitut Technologie-Behindertenhilfe der Evangelischen Stiftung Volmarstein (Germany); The Manchester Metropolitan University (UK); Nettkroken as (Norway); FBL s.r.l. (Italy); Warsaw University of Technology, Faculty of Production Engineering, Institute of Production Systems Organisation (Poland); Aalborg University (Denmark); Intermedium as (Norway); Fundosa Technosite (Spain); Dublin City University (Ireland); Universität Linz, Institut integriert studieren (Austria); Katholieke Universiteit Leuven, Research & Development (Belgium); Accessinmind Limited (UK); Multimedia Campus Kiel (Germany); Department of Product and Systems Design, University of the Aegean (Greece); University of York (UK); ISdAC International Association (Belgium); FernUniversität in Hagen (Germany).

We thank the Web Accessibility Initiative's team from the World Wide Web Consortium for all the useful comments on the draft versions of this document.

2.6 More information about the WAB Cluster

The projects participating in the WAB Cluster are funded by the European Union in the second FP6 IST call (2003) of the eInclusion Strategic Objective. More information about the Cluster and the individual projects can be found on the WAB Cluster web site and on the project web sites.

3 Evaluation procedures and conformance

This section describes the procedures and their objectives for expert and automatic evaluation, and provides information about conformance claims based on the UWEM. The evaluation procedures in this document have the following objectives:
- To improve replicability of the evaluation results by definition of a repeatable sampling scheme in section 4.
- To clarify interpretation, facilitate automatic evaluation and improve replicability of its test results (see section 5).
- To support aggregation of the evaluation results for individual Web pages, Web sites and even sets of Web sites (section 6). The UWEM will also support aggregations across geographical regions as well as economic sectors, as indicated in section 8.
- To support the reporting of test results (sections 7 and 8).
- To support large scale evaluation of Web sites, to identify problem areas in Web sites.
- To evaluate conformance to the checks of section 5. It includes procedures for evaluation down to very specific tests for expert and automatic evaluation.

3.1 Types of evaluation

Accessibility testing may be done via automatic, expert and user testing. The different types of evaluation methods have a number of strengths and weaknesses. Figure 1 describes three different evaluation methods, of which two (automatic and expert) are covered in the UWEM. The figure shows, for example, that automatic evaluation (Tool 1 or Tool 2) can only test for conformance to a subset of the checkpoints (such as the set of tests marked as "fully automatable" in section 5), which further means that only a subset of all possible accessibility barriers can be identified reliably by using automatic testing. Therefore, the coverage of automatic evaluation as an overall indicator of accessibility is low; however, it can identify many barriers reliably. It may also be applied efficiently to test very large numbers of Web resources within one Web site as well as multiple Web sites.

Tool 1 and Tool 2 are here two fully automatic assessment tools that focus on checking accessibility issues, possibly with some overlap of functionality. Some tools can also act as support systems in an expert evaluation process. The tools provide reliable results for a subset of tests and can not only speed up the process by performing some tasks automatically, but also, by providing hints about barrier locations, indicate areas on which the expert evaluators should focus.

User testing is able to identify barriers that are not caught by other testing means, and is also capable of estimating the accessibility of tested scenarios. However, user testing is quite specialised; thus it is not generally suitable for conformance testing, since it is not able to test all aspects of the tests of section 5. The best approach to ensure both accessibility and UWEM conformance is to use a combined approach encompassing all evaluation methods: automatic, expert evaluation and user testing of the Web site.

Figure 1: UWEM Evaluation Types. The figure shows accessibility testing divided into user testing (not addressed in this UWEM version), expert testing (plus support tools) and automatic testing (tool #1 + tool #2 + ...), with the UWEM tests covering expert testing and the UWEM automatic tests covering automatic testing.

User testing is not covered in this version of UWEM. How to involve users in the evaluation of Web content is also described on the W3C/WAI website in the evaluation suite.

The main advantages of automatic testing are:
- The method is replicable (although the dynamic nature of some Web technologies means complete replication is impossible [LEVENE99]).

- It may be applied to a very large number of resources on a Web site and to multiple Web sites.

3.2 Conformance claims

A conformance claim asserts that a Web site meets the accessibility requirements described in section 5. To claim conformance with the UWEM, it is minimally required that:
1. The resources sample and the scope for the evaluation are defined according to section 4.
2. All resources in the sample pass all applicable tests to the corresponding conformance level.

The conformance levels of the UWEM replicate those of [WCAG10], i.e.:
- Conformance Level 1: all tests relevant to [WCAG10] Priority 1 checkpoints are satisfied (see section 5).
- Conformance Level 2: all tests relevant to [WCAG10] Priority 1 and Priority 2 checkpoints are satisfied (see also section 5).

More information on the meaning of priorities can be found in [WCAG10].

Claims of accessibility conformance according to the UWEM methodology must use the following form:
1. The UWEM version and its URI identifier.
2. The URI to a document detailing the scope and the sample (see section 4) to which the claim refers.
3. The level of conformance.

3.3 Tool conformance

Evaluation tools can also claim conformance to UWEM 1.0. In that way, experts evaluating Web sites according to UWEM 1.0 will be able to rely on the results of the tool for the fully automatable tests of the methodology. To claim conformance to UWEM 1.0, the tool MUST implement all fully automatable tests of section 5, to the corresponding conformance level (see previous section). This conformance claim must use the following form:

1. The UWEM version and its URI identifier.
2. The URI to a document detailing a set of public test files where this conformance claim has been verified.
3. The level of conformance.

4 Scope and Sampling of resources

4.1 Definitions

Within UWEM, conformance claims need to refer to a list of resources evaluated within the scope of the Web site(s). This section provides the background definitions for the different concepts used.

Resource: a network data object identified by a URI [RFC3986]. This definition is adapted from the definition of resource in [RFC2616]. This concept is included for non-HTTP resources, e.g., those accessed via the FTP protocol. This type of resource must be expressed via an instance of the earl:webcontent class (see Appendix C and [EARL10-Schema] for further details).

HTTP Resource: a network data object identified by a single HTTP request. This type of resource must be expressed via an instance of the earl:webcontent class (see Appendix C for further details), which may contain additional components of the [HTTP-RDF] W3C Note. This distinction between the resource types is due to the underlying complexity of the HTTP protocol [RFC2616], where content negotiation can lead to different versions of a resource (e.g., language versions via the Accept-Language HTTP header).

Resources list: Conformance claims in UWEM are related to a given resources list, which is expressed as an RDF sequence of resources (of any type). Appendix C describes the RDF syntax used to express a resources list. According to the needs of different applications of UWEM, this resources list may be specified by a variety of different participants in the evaluation process, such as a site owner, a site operator, an inspection organisation, etc. This document only explains how such a list should be unambiguously expressed.

4.2 Procedure to express the scope

For the purposes of the UWEM, a Web site is defined as an arbitrary collection of hyperlinked Web resources, each identified according to the procedure described in section 4.1. The purpose of UWEM is to guarantee replicability of results. Therefore, the unambiguous expression of the tested resources is of key importance for the aggregation and comparison of results. Consequently, blanket statements of the type "the site is conformant to UWEM 1.0 Level 1" will not be accepted as UWEM 1.0 conformance claims. Such a statement would imply that a set of seed resources have been crawled to the end following certain pre-determined limits or constraints.

However, bearing in mind the wide variety of existing crawlers and the different technologies that they use, it is not possible to verify the reliability of such statements. Furthermore, the different RFCs related to Domain Names leave room for interpretation with regard to the concept of a subdomain and its resolution. Therefore, for UWEM 1.0 conformance claims, the scope of a Web site MUST be expressed in the form of a list of resources (see Appendix C).

4.3 Procedure to generate evaluation samples

In general it will not be practical to test all site resources against all evaluation criteria. Accordingly, after determining and disclosing a list of resources to be evaluated and the targeted conformance level, we propose to identify certain subsets or samples. The resources to sample should include the Core Resource List supplemented with a selection of arbitrary resources. We call this the Sampled Resource List.

4.3.1 The Core Resource List

The Core Resource List is a set of generic resources which are likely to be present in most Web sites, and which are core to the use and accessibility evaluation of a site. The Core Resource List therefore represents a set of resources which should be included in any accessibility evaluation of the site. The Core Resource List cannot, in general, be automatically identified, but requires human judgement to select. In the case of completely automatic testing, e.g., in an observatory, the Core Resource List may be determined via heuristic methods. The Core Resource List should consist of as many of the following resources as are applicable:
- The "Home" resource. This is defined as the resource identified by a URL consisting only of an HTTP protocol component followed by the host name. The mapping from this URL to the Home resource may rely on server-side redirection. This resource is required to be present on the site (though it may be differently named).
- The "Contact Information" resource (if any).
- The generic "Help" resource (if any).
- The "Site Map" resource (if any).
- The resources comprising the "Primary Site Search" service (if any). This shall include at least one resource where a search can be initiated and at least one resource showing the results from a sample search.
- Examples of resources representative of the primary intended uses of the site (if identifiable).

- If the site provides services which involve a user systematically traversing a sequence of resources (e.g., a multi-page form or transaction), then representative resources accessed at each step of each such key scenario should be included (if applicable and only if within the scope).
- Resources representative of each category of resources having a substantially distinct look and feel (typically representative of distinct underlying site "templates") (if identifiable).
- Resources representative of each of the following distinct Web technologies (where they are in use): forms, frames, data tables, client-side scripting, applets, plugins, multimedia, etc.

Of course, any single resource may belong to more than one of the categories above: the requirement is simply that the Core Resource List as a whole should, as far as possible, collectively address all the applicable sampling objectives. Any given resource should appear only once in the Core Resource List.

4.3.2 The Sampled Resource List

A Sampled Resource List is a set which can be generated by automatic recursive crawling from a set of seed resources. A Sampled Resource List would typically be used in the context of evaluations carried out over large numbers of sites (against automatic criteria only), where it is not feasible or necessary to evaluate the complete set of Web pages for each site.

If a sampled approach is used, then the sampled result must be representative and unbiased, which means that it must be a random subset of the total number of resources. The Sampled Resource List for large scale automatic evaluation should therefore use a sampling algorithm that samples the resource set using a random uniform, or near-random uniform, sampling algorithm [HENZINGER00], or a random set of samples from the complete set of Web pages (provided that the complete set of Web pages is available). As long as the algorithm used selects a random, unbiased set of Web pages, the sample is valid and should provide the same results within the calculated error margin for a 95% confidence interval.

Note that if a small Web site is evaluated entirely, then the mean value of the aggregated samples can be calculated exactly. For larger Web sites, it is tolerable to sample a random subset of the Web pages, as long as the error margin of the 95% confidence interval is disclosed.

The UWEM aggregation method in section 6 operates on Web page level, so each sample unit should resemble the set of Web resources that together form a rendered Web page.

The error margin of the 95% confidence interval of the mean value of the aggregated samples for a Web site, using the UWEM aggregation method in section 6, should be clearly denoted in the test results. It is up to the tool vendor whether they choose to present the error margin for each Web site, or whether they choose to perform sampling until a given error margin is reached, so that the maximum error margin is presented once. ("Sample until the error margin is achieved" is based on the fact that an increase in sample size decreases the length of the confidence interval without reducing the level of confidence, because the standard deviation of the sample mean decreases as n increases. Note that it is also possible to determine a minimum number of samples that will provide results within a given error margin even in the worst case, with a variance of 0.5; however, this will require more samples than strictly necessary for all Web sites that are better (less variance) than the worst case. With this in mind, we believe it is better to require disclosure of the error margin of the result than to require a fixed number of samples.)

The error margin m of a confidence interval is defined to be the value added to or subtracted from the sample mean, which determines the length of the interval:

    m = z * σ / sqrt(n)

where z = 1.96 for a 95% confidence interval, σ is the standard deviation of the aggregated samples for the Web site using the aggregation method in section 6, and n is the number of samples.

Note that both the sampling algorithm used and any further restrictions limiting or biasing the result, including but not limited to the set of restrictions below, should be explicitly disclosed in any evaluation report:
- Restriction to pages without form interaction.
- Restriction by content type (e.g., no JavaScript/Flash/PDF).
- Restriction by linkage depth from any seed resource.
- Restriction on the specific seed resources used.
- Restriction of the total number of resources in the sample.
- Restriction of the total amount of resource data in the sample.
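The following minimal sketch (Python) illustrates one way a tool vendor could draw a uniform random sample and report the error margin defined above; it is not part of the normative methodology. The function page_score is a hypothetical stand-in for the per-page aggregation defined in section 6.

    # Sketch: uniform random sampling of pages plus the 95% error margin
    # m = z * sigma / sqrt(n) described in section 4.3.2.
    import math
    import random
    import statistics

    def sample_with_error_margin(all_pages, sample_size, page_score):
        """Return (sampled pages, mean score, error margin) for a Web site."""
        sample = random.sample(all_pages, min(sample_size, len(all_pages)))
        scores = [page_score(url) for url in sample]
        mean = statistics.mean(scores)
        sigma = statistics.stdev(scores)              # standard deviation of the aggregated samples
        margin = 1.96 * sigma / math.sqrt(len(scores))  # z = 1.96 for a 95% confidence interval
        return sample, mean, margin

For example, with 30 sampled pages and a standard deviation of 0.2, the reported error margin would be about 1.96 * 0.2 / sqrt(30), i.e. roughly 0.07.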

4.3.3 Manual sample size

As an alternative to the method described above, and especially suited for expert testing, we allow a manual selection of the minimum number of resources. This minimum number of resources in the Sampled Resource List depends on the estimated Web site size. The minimum sample size consists of 30 unique resources (if available), adding 2 unique resources per 1000 up to a maximum of 50 resources in the Sampled Resource List. This is an arbitrary number. More detailed recommendations for sample sizes will be added in a later version of the UWEM and will be largely based on the results from experiments within the EIAO and BenToWeb projects.
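Read as a rule of thumb, and assuming that "per 1000" refers to every 1000 resources of the estimated site size, the rule above could be expressed as the following sketch (illustrative only, not normative):

    # Hypothetical helper for the manual sample-size rule of section 4.3.3:
    # 30 resources, plus 2 per 1000 resources of estimated site size, capped at 50.
    def manual_sample_size(estimated_site_size: int) -> int:
        size = 30 + 2 * (estimated_site_size // 1000)
        return min(size, 50)

    # Examples: a 1000-page site gives 32 resources; a 10000-page site reaches the cap of 50.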

5 Tests for conformance evaluation

5.1 Introduction

This section covers the UWEM automatic and expert testing of Priority 1 and Priority 2 checkpoints of WCAG 1.0 [WCAG10]. For this purpose, this section provides tests for expert and/or automatic evaluation. The structure of the tests in this section is the following:

1. Guideline: quotation of the corresponding WCAG 1.0 guideline. Pointers to additional clarifications might be added.

Checkpoint: quotation of the corresponding WCAG 1.0 checkpoint. Pointers to additional clarifications might be added. For each checkpoint, a set of one or more tests is defined. If no automatic tests for a certain technology are defined, this means that there are no applicable tests for automated testing.

a) (X)HTML-specific tests: set of tests to be made for conformance claims for (X)HTML resources. Each test consists of:
- Title and ID: short descriptive title (informative) and unique identifier (normative).
- Applicability criteria: elements, attributes and combinations thereof used to determine the applicability of the test. Whenever possible, the criteria are presented as XPath expressions; otherwise a prose description is given.
- Procedure: description, in a tool-independent manner, of the test procedure. The procedure may consist of multiple steps and is written so as to enable possible machine-testing.
- Expected results: statement defining the fail or pass conditions with regard to one or more steps in the test procedure. The elements or content specified in the applicability criteria pass the test if the result is not FAIL.
- Fully automatable: statement of whether the test procedure can be fully automated (yes/no).

b) CSS-specific tests: set of tests to be made for conformance claims for CSS resources. Each test consists of:
- Title and ID: short descriptive title (informative) and unique identifier (normative).
- Applicability criteria: CSS selectors, properties and combinations thereof used to determine the applicability of the test.
- Procedure: description, in a tool-independent manner, of the test procedure. The procedure may consist of multiple steps and is written so as to enable possible machine-testing.
- Expected results: statement defining the fail or pass conditions with regard to one or more steps in the test procedure. The elements or content specified in the applicability criteria pass the test if the result is not FAIL.
- Fully automatable: statement of whether the test procedure can be fully automated (yes/no).

c) Tests for external objects: set of tests to be made for conformance claims for objects included or embedded in Web pages through HTML elements or CSS-generated content. This includes applets, Flash, video and audio. Each test consists of:
- Title and ID: short descriptive title (informative) and unique identifier (normative).
- Applicability criteria: elements, attributes and combinations thereof used to determine the applicability of the test.
- Procedure: description, in a tool-independent manner, of the test procedure. The procedure may consist of multiple steps and is written so as to enable possible machine-testing.
- Expected results: statement defining the fail or pass conditions with regard to one or more steps in the test procedure. The elements or content specified in the applicability criteria pass the test if the result is not FAIL.

- Fully automatable: statement of whether the test procedure can be fully automated (yes/no).

2. (Optional) Additional clarification issues, such as definition pointers.

This section does not repeat information available in W3C documents. Instead, it provides pointers to the relevant places and extends the information only when necessary for the defined tests. Web content passes a checkpoint if it fails none of the applicable tests for that checkpoint. Web content fails a checkpoint if it fails any of the applicable tests for that checkpoint.

5.2 Guideline 1

Provide equivalent alternatives to auditory and visual content. (See [WCAG10].) This guideline provides information on how to support auditory and visual content with complementary text alternatives.

5.2.1 Checkpoint 1.1

Provide a text equivalent for every non-text element (e.g., via "alt", "longdesc", or in element content). This includes: images, graphical representations of text (including symbols), image map regions, animations (e.g., animated GIFs), applets and programmatic objects, ascii art, frames, scripts, images used as list bullets, spacers, graphical buttons, sounds (played with or without user interaction), stand-alone audio files, audio tracks of video, and video. [Priority 1] (See [WCAG10] and the techniques in [WCAG10-TECHS], #tech-text-equivalent.)

(X)HTML tests

Test 1.1_HTML_01

This test is targeted to check that non-text content has a text equivalent.

Applicability criteria: all non-text elements that support the alt attribute.
//img
//area
//input[@type='image']
//applet

Procedure: Check that the element has an alt attribute.

Expected results: PASS if true. FAIL if false.

Fully automatable: yes

Test 1.1_HTML_02

This test is targeted to analyse non-text elements with an empty text alternative.

Applicability criteria: non-text elements with an empty text alternative.
//object[count(local-name(*)!='param')=0]

Procedure:
1. Check that the image/content is purely decorative.
2. If #1 is false, check that there is a text alternative adjacent to the non-text content.

Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.

Test 1.1_HTML_03

This test is targeted to analyse non-text elements with a non-empty text alternative.

Applicability criteria: all non-text elements with a non-empty text alternative.
//img[@alt][@alt!='']
//area[@alt][@alt!='']
//input[@type='image'][@alt][@alt!='']
//applet[@alt][@alt!='']
//object[count(local-name(*)!='param')>0]

Procedure: Check that the text alternative appropriately represents the function of the non-text element within its context. (If there is text content adjacent to the non-text element, the text alternative can consist of this text content combined with the non-text element's alt attribute value.)

Expected results: PASS if true. FAIL if false.
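Of the three tests above, only Test 1.1_HTML_01 is marked fully automatable. A minimal sketch of how a tool might implement it is shown below; it uses the third-party lxml library (an assumption; any XPath-capable HTML parser would do) and is an illustration, not part of the normative test definition.

    # Sketch: apply the Test 1.1_HTML_01 applicability XPaths and check that
    # each applicable element carries an alt attribute.
    from lxml import html

    APPLICABILITY = "//img | //area | //input[@type='image'] | //applet"

    def test_1_1_html_01(page_source: str):
        """Return a list of (tag, source line) pairs for elements that FAIL the test."""
        document = html.fromstring(page_source)
        failures = []
        for element in document.xpath(APPLICABILITY):
            if element.get("alt") is None:   # no alt attribute at all
                failures.append((element.tag, element.sourceline))
        return failures

    # Example: test_1_1_html_01('<html><body><img src="logo.png"></body></html>')
    # reports the img element as failing Test 1.1_HTML_01.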

35 Test 1.1_HTML_04 This test is targeted to analyse long descriptions of media elements. Applicability criteria: all long descriptions of images and media elements. 1. Check that the long description referenced by the longdesc or href attribute is available. 2. Check that it appropriately describes the element. Expected results: FAIL if #1 or #2 is false Test 1.1_HTML_05 This test is targeted to find complex images and non-text content that require a long description. Applicability criteria: all img and object elements. //img[not(@longdesc)] //object[not(.//a/@href)] 1. Check that it does not require a long description. 2. If #1 is false, check that a long description is referenced by the longdesc attribute for img elements, or provided by child elements (including an a element referencing a long description) for object elements, respectively. Note that the child elements of the object element can contain a link to a long description. Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false Test 1.1_HTML_06 This test is targeted to non-text content embedded with the non-standard embed element. Since there is no defined method of providing alternatives for embed, embed is inherently inaccessible. Applicability criteria: all embed elements. //embed 7 July 2006 Page 35 of 140

36 Select elements Expected results: FAIL if true. Fully automatable: Yes Test 1.1_HTML_07 This test is targeted to check for text alternatives for non-text content loaded into an inline frame. Applicability criteria: all iframe elements. //iframe 1. Check if the element loads non-text content. 2. If #1 is true, check that the element contains a text alternative or a link to a text alternative for the non-text content. Note that the iframe content cannot be updated when the content loaded by the iframe changes as a result of user interaction or script execution. Expected results: PASS if #2 is true. FAIL if #2 is false Test 1.1_HTML_08 This test is targeted to check for frames that directly load non-text content. Applicability criteria: all frame elements. //frame Check that the element directly loads non-text content. Expected results: FAIL if true Test 1.1_HTML_09 This test is targeted to find embedded or linked audio-only components without a text transcript. Applicability criteria: All audio-only components. //object //applet //a 7 July 2006 Page 36 of 140

37 Check that there is a text transcript of the audio-only component. Expected results: PASS if true. FAIL if false Test 1.1_HTML_10 This test is targeted to analyse text transcripts of embedded or linked audio-only components. Applicability criteria: All audio-only components with a transcript. //object //applet //a Check that the text transcript fully describes all the important information in the audio track(s) of the audio-only component, including spoken words and non-spoken sounds such as sound effects. Expected results: PASS if true. FAIL if false Tests for external objects Test 1.1_external_01 This test is targeted to find linked or embedded multimedia presentations without associated captions. Applicability criteria: All multimedia presentations with at least one audio and at least one video track. //object //applet //a Check that all applicable components have associated captions. Expected results: PASS if true. FAIL if false Test 1.1_external_02 This test is targeted to analyse the associated captions of multimedia presentations. Applicability criteria: all multimedia presentations with at least one audio 7 July 2006 Page 37 of 140

38 and at least one video track. //object //applet //a Check that the associated captions fully convey all the important information in the audio track(s) of the multimedia presentation, including spoken words and non-spoken sounds such as sound effects. Expected results: PASS if true. FAIL if false Checkpoint 1.2 Provide redundant text links for each active region of a server-side image map. [Priority 1] (See and the techniques in WEBCONTENT-TECHS/#tech-redundant-server-links) (X)HTML tests Test 1.2_HTML_01 This test is targeted to find active regions of a server-side image map without redundant text links. Applicability criteria: all server-side image maps. //img[@ismap] //input[@type='image'][@ismap] 1. Identify all active regions of the image map. 2. Check that there is a redundant text link for each active region. Expected results: FAIL if #2 is false (for at least one active region) Checkpoint 1.3 Until user agents can automatically read aloud the text equivalent of a visual track, provide an auditory description of the important information of the visual track of a multimedia presentation. [Priority 1] 7 July 2006 Page 38 of 140

39 (See and the techniques in WEBCONTENT-TECHS/#tech-auditory-descriptions) Tests for external objects Test 1.3_external_01 This test is targeted to find multimedia presentations without an auditory description of the important information of their visual track. Applicability criteria: all multimedia presentations with at least one visual track. //object //applet //a Check that there is an auditory description of the important information of the visual track. Expected results: PASS if true. FAIL if false Test 1.3_external_02 This test is targeted to analyse the auditory description of linked and embedded multimedia presentations. Applicability criteria: all multimedia presentations with at least one visual track. //object //applet //a Check that the auditory description effectively conveys all important visual elements of the presentation including information about actors, actions, body language, graphics, and scene changes. Expected results: PASS if true. FAIL if false Checkpoint 1.4 For any time-based multimedia presentation (e.g., a movie or animation), synchronize equivalent alternatives (e.g., captions or auditory descriptions of the visual track) with the presentation. 7 July 2006 Page 39 of 140

40 [Priority 1] (See and the techniques in WEBCONTENT-TECHS/#tech-synchronize-equivalents) Tests for external objects Test 1.4_external_01 This test is targeted to check the synchronisation of equivalent alternatives for multimedia presentations. Applicability criteria: all multimedia presentations with equivalent alternatives. //a //applet //object Check that the equivalent alternative (captions, auditory descriptions or other equivalent alternative, as applicable) is synchronised with the presentation. Expected results: PASS if true. FAIL if false. 5.3 Guideline 2 Don't rely on color alone. (See This guideline provides information on how to use colour appropriately Checkpoint 2.1 Ensure that all information conveyed with color is also available without color, for example from context or markup. [Priority 1] (See and the techniques in TECHS/#tech-color-convey) (X)HTML tests Test 2.1_HTML_01 This test is targeted to find phrases in text that refer to parts of a document only by mentioning their colour. 7 July 2006 Page 40 of 140

Applicability criteria: all text.
//body

Procedure:
1. Check that the text does not refer to parts of a document only by mentioning their colour.
2. If #1 is false, check that references in text via colour are redundant.

Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.

Test 2.1_HTML_02

This test is targeted to find phrases in non-text content that refer to parts of a document only by mentioning their colour.

Applicability criteria: all text in non-text content.
//img
//area
//input[@type='image']
//applet
//object

Procedure:
1. Check that the text does not refer to parts of a document only by mentioning their colour.
2. If #1 is false, check that references in text via colour are redundant.

Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.

Test 2.1_HTML_03

This test is targeted to find coloured elements without redundant methods of conveying the information.

Applicability criteria: all coloured HTML elements.
//*/@color
//*/@bgcolor
//*/@text

Procedure: Check that the colour information is redundant.

Expected results: PASS if true. FAIL if false.

CSS tests

Test 2.1_CSS_01

This test is targeted to find coloured elements without redundant methods of conveying the information.

Applicability criteria: all coloured content produced by CSS.
color
background-color
background
border-color
border
outline-color
outline

Procedure: Check that the colour information is redundant.

Expected results: PASS if true. FAIL if false.

5.3.2 Checkpoint 2.2

Ensure that foreground and background color combinations provide sufficient contrast when viewed by someone having color deficits or when viewed on a black and white screen. [Priority 2 for images, Priority 3 for text] (See [WCAG10] and the techniques in [WCAG10-TECHS], #tech-color-contrast.)

(X)HTML tests

Test 2.2_HTML_01

This test is targeted to find images without sufficient colour contrast.

Applicability criteria: all images.
//img
//area
//input[@type='image']
//object

Procedure: Check that the contrast between foreground and background colour is sufficient to convey the information.

Expected results: PASS if true. FAIL if false.

Fully automatable: yes (if a colour contrast algorithm is available; an example of such an algorithm is given in the WCAG 2.0 luminosity contrast definition).

CSS tests

Test 2.2_CSS_01

This test is targeted to find text without sufficient colour contrast.

Applicability criteria: images referenced from CSS styles. (These CSS properties can specify the URI of an image to use as content or background.)
background-image
background
content
cursor
list-style-image

Procedure: Check that the contrast between foreground and background colour is sufficient to convey the information.

Expected results: PASS if true. FAIL if false.

Fully automatable: yes (if a colour contrast algorithm is available).
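As a concrete illustration of such an algorithm, the sketch below computes a relative-luminance contrast ratio along the lines of the luminosity contrast definition in the WCAG 2.0 drafts. The exact formula and thresholds in WCAG 2.0 may differ from this approximation, so treat it as an assumption rather than the normative UWEM procedure.

    # Sketch of a colour contrast check (assumed formula: WCAG 2.0 relative
    # luminance with sRGB linearisation, contrast ratio (L1 + 0.05) / (L2 + 0.05)).
    def _linearise(channel_8bit: int) -> float:
        c = channel_8bit / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(rgb) -> float:
        r, g, b = (_linearise(v) for v in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(foreground_rgb, background_rgb) -> float:
        lighter = max(relative_luminance(foreground_rgb), relative_luminance(background_rgb))
        darker = min(relative_luminance(foreground_rgb), relative_luminance(background_rgb))
        return (lighter + 0.05) / (darker + 0.05)

    # Example: contrast_ratio((0, 0, 0), (255, 255, 255)) gives 21.0 (black on white).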

5.4 Guideline 3

Use markup and style sheets and do so properly. (See [WCAG10].)

5.4.1 Checkpoint 3.1

When an appropriate markup language exists, use markup rather than images to convey information. [Priority 2] (See [WCAG10] and the techniques in [WCAG10-TECHS], #tech-use-markup.)

(X)HTML tests

Test 3.1_HTML_01

This test is targeted to check that there are no images containing text that can be replaced by markup constructs.

Applicability criteria: all images.
//img
//object

Procedure:
1. Check that the image contains text.
2. If #1 is true, check that the image can be replaced by a markup construct without loss of information conveyed by the image.

Expected result: FAIL if #2 is true.

Test 3.1_HTML_02

This test is targeted to check that there are no images of mathematical equations that can be replaced by markup constructs.

Applicability criteria: all images.
//img
//input[@type='image']
//object

Procedure:
1. Check that the image contains a mathematical equation.
2. If #1 is true, check that the image can be replaced by a markup construct.

Expected result: FAIL if #2 is true.

Test 3.1_HTML_03

This test is targeted to check that there are no bitmap images that do not contain text or mathematical equations and can be replaced by markup.

Applicability criteria: all bitmap images that do not contain text or mathematical equations.
//img
//input[@type='image']
//object

Procedure: Check that there is no appropriate markup language able to convey the information in the image.

Expected result: PASS if true.

5.4.2 Checkpoint 3.2

Create documents that validate to published formal grammars. [Priority 2] (See [WCAG10] and the techniques in [WCAG10-TECHS], #tech-identify-grammar.)

(X)HTML tests

Test 3.2_HTML_01

This test is targeted to check that the document contains a valid document type declaration. Note: W3C Quality Assurance maintains a document entitled "Recommended DTDs to use in your Web document".

Applicability criteria: content preceding the HTML element of any HTML 4.x or XHTML 1.0 document.

Procedure: Check that the doctype declaration is valid. (In XHTML 1.0 documents, the public identifier in the DOCTYPE declaration must reference one of three DTDs using the respective Formal Public Identifier; see "Strictly Conforming Documents" in XHTML 1.0: The Extensible HyperText Markup Language (Second Edition), A Reformulation of HTML 4 in XML, W3C Recommendation 26 January 2000, revised 1 August 2002.)

Expected result: PASS if true. FAIL if false.

Fully automatable: yes

Test 3.2_HTML_02

This test is targeted to find violations against the formal schema for HTML 4.x or XHTML 1.0.

Applicability criteria: any HTML 4.x or XHTML 1.0 document.

Procedure:
a) For HTML, check that the document validates against the specified document type using a validating SGML parser.
b) For XHTML, check that the document is well formed and that it validates against the specified document type using a validating XML parser.

Expected result: FAIL if false.

Fully automatable: yes
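Test 3.2_HTML_01 lends itself to automation. The sketch below checks the DOCTYPE public identifier against a list of W3C Formal Public Identifiers; the list shown is an illustrative subset, not the complete set from the W3C QA document referenced above, and the check is deliberately simplified.

    # Sketch: does the document declare a DOCTYPE with a recognised W3C
    # Formal Public Identifier (illustrative subset)?
    import re

    RECOMMENDED_FPIS = {
        "-//W3C//DTD HTML 4.01//EN",
        "-//W3C//DTD HTML 4.01 Transitional//EN",
        "-//W3C//DTD HTML 4.01 Frameset//EN",
        "-//W3C//DTD XHTML 1.0 Strict//EN",
        "-//W3C//DTD XHTML 1.0 Transitional//EN",
        "-//W3C//DTD XHTML 1.0 Frameset//EN",
        "-//W3C//DTD XHTML 1.1//EN",
    }

    DOCTYPE_RE = re.compile(r'<!DOCTYPE\s+html\s+PUBLIC\s+"([^"]+)"', re.IGNORECASE)

    def has_recommended_doctype(page_source: str) -> bool:
        """PASS (True) if the page declares a DOCTYPE with a recognised public identifier."""
        match = DOCTYPE_RE.search(page_source)
        return bool(match) and match.group(1) in RECOMMENDED_FPIS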

CSS tests

Test 3.2_CSS_01

This test is targeted to find violations against the formal grammar for CSS 1.0 or CSS 2.x.

Applicability criteria: any CSS style rules.

Procedure:
a) For style rules inside the style element or in style attributes in an (X)HTML file: check that they conform to the formal grammar of CSS using a SAC parser.
b) For CSS files: check that parsing each CSS file with a SAC parser causes no errors.

Note that the W3C's CSS Validator does more than checking CSS rules against the formal grammar: it also checks for (un-)defined properties and their values, which are not included in the grammar. The grammar does not define the actual vocabulary of CSS.

Expected result: PASS if true. FAIL if false.

Fully automatable: yes

5.4.3 Checkpoint 3.3

Use style sheets to control layout and presentation. [Priority 2] (See [WCAG10] and the techniques in [WCAG10-TECHS], #tech-style-sheets.)

(X)HTML tests

Test 3.3_HTML_01

This test is targeted to find whitespace that is used to control spacing between characters within words. Note: there is no language-independent definition of the term "word", so evaluators need to check whether the term "word" is applicable to the language of the content they are evaluating and, if so, make sure that they understand what the term means in that language.

Applicability criteria: any word containing whitespace.
text()

Procedure: Check that the whitespace is not used to convey emphasis or importance.

Expected result: PASS if true. FAIL if false.

Test 3.3_HTML_02

This test is targeted to determine whether layout or presentation of one or more elements has been achieved via means other than CSS.

Applicability criteria: elements and attributes that can be used to position or influence presentation.
//img
//font
//td (in layout table)
//th (in layout table)
//center
//u
//b
//i
//blink
//strong (unless used semantically)
//em (unless used semantically)

Procedure: Check that the resulting position and/or presentation could not be achieved using style sheets.

Expected result: FAIL if false.

5.4.4 Checkpoint 3.4

Use relative rather than absolute units in markup language attribute values and style sheet property values. [Priority 2] (See [WCAG10] and the techniques in [WCAG10-TECHS], #tech-relative-units.)

48 (X)HTML tests Test 3.4_HTML_01 This test is targeted to check for relative values in (X)HTML attributes of type %Length;. Applicability criteria: attributes that specify height, width, cell padding, cell spacing or a character offset as a number of pixels or a percentage. //table/@cellpadding //table/@cellspacing //col/@charoff //colgroup/@charoff //tbody/@charoff //td/@charoff //tfoot/@charoff //th/@charoff //thead/@charoff //tr/@charoff //iframe/@height //td/@height //th/@height //img/@height //object/@height //applet/@height //hr/@width //iframe/@width //img/@width //object/@width //table/@width //td/@width //th/@width //applet/@width Check that the value of the attribute is a percentage value (positive integer + '%') or that it is an absolute value that does not interfere with the readability of other text elements. Expected result: PASS if true. FAIL if false. Fully automatable: yes Test 3.4_HTML_02 This test is targeted to check for relative values in (X)HTML attributes of type multi-length 16 ("%MultiLength;" in the HTML 4.01 DTD). Applicability criteria: attributes that specify the width of columns or column groups July 2006 Page 48 of 140

49 Check that the value of the attribute is a percentage value (positive integer + '%') or an * (asterisk) value or that it is an absolute value that does not interfere with the readability of other text elements. Expected result: PASS if true. FAIL if false. Fully automatable: yes Test 3.4_HTML_03 This test is targeted to check for relative values in (X)HTML attributes of type multi-length-list 17 or ("%MultiLengths;" in the HTML 4.01 DTD: a commaseparated list of MultiLength). Applicability criteria: attributes that specify a list of lengths in pixels, a percentage or a relative value. //frameset/@cols //frameset/@rows Check that each value listed in the attribute is a percentage value (positive integer + '%') or an * (asterisk) value or that it is an absolute value that does not interfere with the readability of other text elements. Expected result: PASS if true. FAIL if false. Fully automatable: yes CSS tests Test 3.4_CSS_01 This test is targeted to check for relative value units in CSS properties that may contain <length> values. Applicability criteria: CSS properties that specify length, width, height, size, spacing or offset. background-position border-spacing bottom font-size height left letter-spacing line-height July 2006 Page 49 of 140

50 marker-offset max-height max-width min-height min-width right size text-indent text-shadow top vertical-align width word-spacing 1. Check that the unit of the value is not cm, mm, in, pt, pc or px Check that the value is not xx-small, x-small, small, medium, large, x- large or xx-large If an absolute value is used, check that the absolute value does not interfere with the readability of any text element. Expected result: PASS if #1 and #2 true or #3 true. Fully automatable: yes Checkpoint 3.5 Use header elements to convey document structure and use them according to specification. [Priority 2] (See and the techniques in WEBCONTENT-TECHS/#tech-logical-headings) (X)HTML tests Test 3.5_HTML_01 This test is targeted to find markup constructs that conceptually represent headings, but are not marked up with hx elements. Applicability criteria: the body of a web page. //body//* 1. Select markup constructs that conceptually represent headings. 18 The CSS 2.0 specification lists 'px' (pixel) as a relative unit. However, the size of a pixel is relative to the computer display, not to any properties defined in web content. The CSS 2.0 specification also defines a reference pixel with an absolute size. 19 These are defined as absolute values at 7 July 2006 Page 50 of 140

2. Check whether the headings are marked up with hx elements.
Hint: candidates for insufficient markup are e.g. combinations of font-weight/font-style changes (HTML b, i elements; CSS font-weight, font-style properties) and font-size enlargements (HTML big, font elements; CSS font-size property). This list is not exhaustive.
Expected result: PASS if true.

Test 3.5_HTML_02
This test is targeted to check that there is no heading element in the page that has a higher level than the first heading.
Applicability criteria: all heading elements except h6. 20
//h1
//h2
//h3
//h4
//h5
Check that the heading element does not have a higher level than the first heading element in the document.
Expected result: PASS if true.
Fully automatable: yes

Test 3.5_HTML_03
This test is targeted to check that no levels are skipped in the heading hierarchy.
Applicability criteria: all heading elements except h1 and h2. 21
//h3
//h4
//h5
//h6
Check that the inspected heading element does not skip one or more levels in the structure (e.g. check that for h5 the preceding heading element is either h4, h5 or h6).
Expected result: PASS if true.
Fully automatable: yes

20 h6 elements don't need to be checked because they represent the lowest heading level.
21 h1 and h2 don't have to be checked because there is no heading level that is two or more levels higher, so they cannot skip a level.
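A minimal Python sketch (lxml-based; the function name is an illustrative assumption) of how Tests 3.5_HTML_02 and 3.5_HTML_03 could be automated: it walks the headings in document order and reports any heading that is higher than the first one or that skips a level relative to its predecessor.

    from lxml import html

    def check_heading_structure(markup: str):
        """Return a list of (test_id, heading_tag) findings for 3.5_HTML_02/03."""
        tree = html.fromstring(markup)
        headings = [int(el.tag[1]) for el in tree.iter()
                    if isinstance(el.tag, str) and el.tag in
                    ("h1", "h2", "h3", "h4", "h5", "h6")]
        findings = []
        if not headings:
            return findings
        first = headings[0]
        for previous, current in zip(headings, headings[1:]):
            if current < first:                      # Test 3.5_HTML_02
                findings.append(("3.5_HTML_02", "h%d" % current))
            if current > previous + 1:               # Test 3.5_HTML_03 (skipped level)
                findings.append(("3.5_HTML_03", "h%d" % current))
        return findings

    page = "<body><h2>A</h2><h4>B</h4><h1>C</h1></body>"
    print(check_heading_structure(page))
    # [('3.5_HTML_03', 'h4'), ('3.5_HTML_02', 'h1')]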

Test 3.5_HTML_04
This test is targeted to check whether heading elements have been used (improperly) for font formatting.
Applicability criteria: all heading elements (h1, ..., h6).
//h1
//h2
//h3
//h4
//h5
//h6
Check that headings are not used to create font formatting effects.
Expected result: PASS if true. FAIL if false.

Test 3.5_HTML_05
This test is targeted to check for a correct heading level hierarchy.
Applicability criteria: the whole document.
//body
Check that the heading elements convey the logical structure of the document.
Expected result: PASS if true.

Checkpoint 3.6
Mark up lists and list items properly. [Priority 2]
(See and the techniques in TECHS/#tech-list-structure)
Encode list structure and list items (UL, OL, DL, LI) properly. The HTML list elements DL, UL, and OL (available in HTML 3.2 and HTML 4.0) should only be used to create lists, not for formatting effects such as indentation. When possible, use ordered (numbered) lists to help navigation.

(X)HTML tests

Test 3.6_HTML_01
Authors can disable the default list style of ordered and unordered lists and manually create multi-level numbering (for example, 1, 1.1, 1.2, 1.2.1). This test is targeted to check that manually added list numbering conveys the depth of the list to users.
Applicability criteria: all nested ordered and unordered lists with manually inserted multi-level numbers.
//li/ol
//li/ul
Check that the numbering does not skip levels or numbers.
Expected result: PASS if true.

Test 3.6_HTML_02
This test is targeted to find out whether the list item elements (li) are appropriate for the context of the document, i.e. used to create lists, not for formatting such as indentation.
Applicability criteria: all list item elements, including definitions in definition lists.
//ul/li
//ol/li
//dl/dd
Check that each li or dd element is used to mark up a list item and not for formatting effects.
Expected result: PASS if true. FAIL if false.

Test 3.6_HTML_03
This test is targeted to find paragraphs, line breaks and numbers that are used to simulate numbered lists and which can be replaced with the ol element.
Applicability criteria: all paragraphs starting with a counter (number or character that indicates an order or sequence).
//p

54 //p//br 1. Check that the document does not contain sequences of paragraphs that start with counters to simulate numbered lists. 2. Check that the document does not contain paragraphs with line breaks followed by counters to simulate numbered lists. Expected result: PASS if #1 and #2 are true. Fully automatable: yes Test 3.6_HTML_04 This test is targeted to find paragraphs, line breaks and certain characters such as asterisk and hyphens that are used to simulate unordered lists and which can be replaced with the ul element. Applicability criteria: all paragraphs starting with characters that can be used to simulate list items. //p //p//br 1. Check that the document does not contain sequences of paragraphs that start with characters such as asterisk or hyphen to simulate unordered lists. 2. Check that the document does not contain paragraphs with line breaks followed by characters such as asterisk or hyphen to simulate unordered lists. Expected result: PASS if #1 and #2 are true. FAIL if #1 or #2 are false. Fully automatable: yes Test 3.6_HTML_05 This test is targeted to find paragraphs, line breaks and images displaying numbers that are used to simulate ordered lists and which can be replaced with the ol element and CSS. Applicability criteria: all paragraphs starting with images displaying a number or other types of counters. //p//img //p//br/following-sibling::img 1. Check that the document does not contain sequences of paragraphs that start with images displaying numbers or other types of counters to simulate ordered lists. 7 July 2006 Page 54 of 140

55 2. Check that the document does not contain paragraphs with line breaks followed by images of consecutive numbers or other types of counters to simulate numbered lists. Expected result: PASS if #1 and #2 are true. FAIL if #1 or #2 are false Test 3.6_HTML_06 This test is targeted to find paragraphs, line breaks and images (especially bullet images) that are used to simulate unordered lists and which can be replaced with the ul element and CSS. Applicability criteria: all paragraphs starting with bullet images. /p//img //p//br/following-sibling::img 1. Check that the document does not contain sequences of paragraphs that start with images of bullets to simulate unordered lists. 2. Check that the document does not contain paragraphs with line breaks followed by images of bullets to simulate unordered lists. Expected result: PASS if #1 and #2 are true. FAIL if #1 or #2 are false Test 3.6_HTML_07 This test is targeted to find paragraphs, line breaks and formatting effects that are used to simulate definition lists and which can be replaced with the dt and dd elements. Applicability criteria: all paragraphs starting with a term followed by a definition. //p //p//br Check that the document does not contain paragraphs that should be replaced by a definition list. Expected result: PASS if true. FAIL if false. 7 July 2006 Page 55 of 140
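The two fully automatable checks above (Tests 3.6_HTML_03 and 3.6_HTML_04) amount to scanning consecutive paragraphs for counter or bullet prefixes. A minimal Python sketch of that idea follows; the prefix patterns and minimum run length are illustrative assumptions, and the br-based simulations (check #2 of each test) would need analogous handling.

    import re
    from lxml import html

    # Illustrative prefixes only: "1." / "a)" style counters and "*" / "-" bullets.
    COUNTER_PREFIX = re.compile(r"^\s*(\d+[\.\)]|[a-z][\.\)])\s+", re.IGNORECASE)
    BULLET_PREFIX = re.compile(r"^\s*[\*\-\u2022]\s+")

    def find_simulated_lists(markup: str, minimum_run: int = 2):
        """Report runs of consecutive p elements that look like fake lists."""
        tree = html.fromstring(markup)
        findings, run = [], []
        for p in tree.xpath("//p") + [None]:        # sentinel flushes the last run
            text = p.text_content() if p is not None else ""
            if p is not None and (COUNTER_PREFIX.match(text) or BULLET_PREFIX.match(text)):
                run.append(text.strip())
            else:
                if len(run) >= minimum_run:
                    findings.append(run)
                run = []
        return findings

    page = "<body><p>1. First</p><p>2. Second</p><p>Unrelated</p></body>"
    print(find_simulated_lists(page))  # [['1. First', '2. Second']]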

CSS tests

Test 3.6_CSS_01
This test is targeted to check that a fall-back list style is present if images are used as list bullets.
Applicability criteria: all list-style properties.
*{list-style:...;}, *{list-style-image:url(...);}, *{list-style-type:...;}
Check that a fall-back bullet style (e.g., 'disc') is specified in case a bullet image cannot be loaded.
Expected result: PASS if true.
Fully automatable: yes

Checkpoint 3.7
Mark up quotations. Do not use quotation markup for formatting effects such as indentation. [Priority 2]
(See and the techniques in TECHS/#tech-quotes)

(X)HTML tests

Test 3.7_HTML_01
This test is targeted to check that quotation elements are used properly to mark up quotations and not for formatting or indentation effects.
Applicability criteria: all blockquote elements.
//blockquote
Check that the blockquote element is used to mark up a quotation.
Expected result: PASS if true. FAIL if false.

Test 3.7_HTML_02
This test is targeted to check that short quotations (q element) are used properly for quotations and not for layout purposes.

Applicability criteria: all q elements.
//q
Check that the q element is used to mark up a quotation.
Expected result: PASS if true. FAIL if false.

Test 3.7_HTML_03
This test is targeted to find quotations that have not been marked up with q or blockquote.
Applicability criteria: all text.
//p
1. Are there any quotations in the selected paragraphs, e.g. passages that contain quotation marks in the markup or have CSS-generated quotation marks?
2. If yes, check that the quotations are marked up with q or blockquote.
Expected result: PASS if #2 is true. FAIL if #2 is false.

Test 3.7_HTML_04
This test is targeted to find any cite and address elements that are used to italicise text.
Applicability criteria: all cite and address elements.
//cite
//address
1. Select any cite and address elements.
2. Determine if they are used to italicise text instead of marking up a citation or providing information on the author of the document, respectively.
Expected result: FAIL if #2 is true.
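The Checkpoint 3.7 tests require human judgement, but the candidate selection can be automated. A minimal Python sketch (illustrative names and quote-character list) that gathers the nodes an evaluator needs to inspect for Tests 3.7_HTML_01 to 3.7_HTML_04:

    from lxml import html

    QUOTE_CHARS = ('"', "'", '\u201c', '\u201d', '\u00ab', '\u00bb')

    def collect_quotation_candidates(markup: str):
        """Group the nodes an evaluator must judge for Checkpoint 3.7."""
        tree = html.fromstring(markup)
        return {
            "3.7_HTML_01": tree.xpath("//blockquote"),
            "3.7_HTML_02": tree.xpath("//q"),
            # Paragraphs containing quotation marks but no q/blockquote markup.
            "3.7_HTML_03": [p for p in tree.xpath("//p[not(.//q) and not(ancestor::blockquote)]")
                            if any(c in p.text_content() for c in QUOTE_CHARS)],
            "3.7_HTML_04": tree.xpath("//cite | //address"),
        }

    page = '<body><p>He said "hello" to us.</p><blockquote>Indented note</blockquote></body>'
    for test_id, nodes in collect_quotation_candidates(page).items():
        print(test_id, [n.tag for n in nodes])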

58 5.5 Guideline 4 Clarify natural language usage. (See This guideline provides information on how to facilitate pronunciation or interpretation of abbreviated or foreign text Checkpoint 4.1 Clearly identify changes in the natural language of a document's text and any text equivalents (e.g., captions). [Priority 1] (See and the techniques in WEBCONTENT-TECHS/#tech-identify-changes) (X)HTML tests Test 4.1_HTML_01 This test is targeted to find changes in natural language that are not marked up. Applicability criteria: all elements containing text. //*[true(text())] 1. Select elements starting at the lowest level (the "leaves" in the tree structure), and move upwards to parent elements, checking every element in the document. 2. For each text segment 22 in the current element, determine the natural language. 3. Determine if the segment contains proper names, so-called loanwords - words and phrases that have become de facto international, transcending many specific natural languages etc., which can be exempted from Checkpoint Check that the natural language found in #2 is the same for the rest of the content in the current element. 5. Determine the natural language of the text in the parent element. (If the parent element is the html element, its language is defined either by the lang attribute or from page context.) 6. If two different natural languages are found in #5, check that the (current) element has a lang attribute that identifies the natural language 22 In many languages, a text segment corresponds to single words or groups of words; in some languages, especially those with an ideographic writing system, a text segment can even correspond to a single character. 7 July 2006 Page 58 of 140

59 by means of the corresponding two-letter code defined in ISO Expected results: PASS if #3 or #4, #5 and #6 are true. FAIL if #4, #5 or #6 is false Test 4.1_HTML_02 This test is targeted to find changes in natural language that are not marked up. Applicability criteria: all attributes that specify alternative text, advisory information (the title attribute), a table summary, a label in a hierarchical menu (the label attribute), a standby text (the standby attribute) or other textual content. //img/@alt //applet/@alt //area/@alt //input/@alt //meta/@content //option/@label //optgroup/@label //object/@standby //table/@summary //*/@title 24 ) //input[@type='text']/@value //input[@type='submit']/@value //frame/@name //iframe/@name 1. Determine the natural language of the text in the attribute. 2. Determine the natural language of the text in the nearest ancestor of the attribute's parent element. 3. If the languages found in steps 1 and 2 are different, check that the attribute's parent element has a lang attribute that identifies the natural language of the attribute value by means of the corresponding two-letter code defined in ISO639. Expected results: PASS if true. FAIL if false Test 4.1_HTML_03 This test is targeted to find block-level elements with changes in text direction 23 The HTTP 'Content-Language' header is not taken into account here because this header should not be used for text processing. See for further information. 24 All elements except base, basefont, head, html, meta, param, script and title. 7 July 2006 Page 59 of 140

60 (for natural languages) that are not marked up. Applicability criteria: all block-level elements containing text 25. //*[true(text())] 1. Determine the direction of the text in the (current) element. 2. If the current element is the html or body element and the textdirection if right to left, check that the element has a dir attribute with the value "rtl". 3. If the current element is not the html element, determine the direction of the text in the parent element. 4. If two different text directions are found in step 3, check that the (current) element has a dir attribute that identifies the text direction by means of the corresponding value (rtl or ltr). Expected results: PASS if #2 and #4 are true. FAIL if #2 or #4 is false Test 4.1_HTML_04 This test is targeted to find changes in text direction (for natural languages) that are not marked up. Applicability criteria: all attributes that specify alternative text, advisory 25 Note the following information from the section "Inheritance of text direction information" in HTML 4.01 ( "The Unicode bidirectional algorithm requires a base text direction for text blocks. To specify the base direction of a block-level element, set the element's dir attribute. The default value of the dir attribute is "ltr" (left-to-right text). When the dir attribute is set for a block-level element, it remains in effect for the duration of the element and any nested block-level elements. Setting the dir attribute on a nested element overrides the inherited value. (...) Inline elements, on the other hand, do not inherit the dir attribute. This means that an inline element without a dir attribute does not open an additional level of embedding with respect to the bidirectional algorithm. (Here, an element is considered to be block-level or inline based on its default presentation. Note that the INS and DEL elements can be block-level or inline depending on their context.)" Also note the following information from the section "The effect of style sheets on bidirectionality" in HTML 4.01 ( "In general, using style sheets to change an element's visual rendering from block-level to inline or vice-versa is straightforward. However, because the bidirectional algorithm relies on the inline/block-level distinction, special care must be taken during the transformation. When an inline element that does not have a dir attribute is transformed to the style of a blocklevel element by a style sheet, it inherits the dir attribute from its closest parent block element to define the base direction of the block. When a block element that does not have a dir attribute is transformed to the style of an inline element by a style sheet, the resulting presentation should be equivalent, in terms of bidirectional formatting, to the formatting obtained by explicitly adding a dir attribute (assigned the inherited value) to the transformed element." 7 July 2006 Page 60 of 140

61 information (the title attribute), a table summary, a label in a hierarchical menu (the label attribute), a standby text (the standby attribute) or other textual content. //img/@alt //applet/@alt //area/@alt //input/@alt //meta/@content //option/@label //optgroup/@label //object/@standby //table/@summary //*/@title 26 ) //input[@type='text']/@value //input[@type='submit']/@value //frame/@name //iframe/@name 1. Determine the direction of the text in the attribute. 2. Check that the parent element has a dir attribute that identifies the text direction by means of the corresponding value (rtl or ltr). Expected results: PASS if #2 is true. FAIL if #2 is false CSS tests Test 4.1_CSS_01 This test is targeted to find each CSS style that generates text in a different natural language than the language of the parent element of the element or elements for which the style is defined. Note the following information from the CSS 2.0 specification: "As their names indicate, the :before and :after pseudo-elements specify the location of content before and after an element's document tree content" (emphasis added) 27. For this reason, the language of the generated content should be the same as the language of the parent of the element for which the CSS style is defined. Applicability criteria: each text string generated by CSS styles. *:after {content: "...";} *:before {content: "...";} 1. Determine the language of the text string. 2. Determine the language defined for or inherited by the parent element 26 All elements except base, basefont, head, html, meta, param, script and title July 2006 Page 61 of 140

62 of the element for which the CSS style is defined. 3. Check that the languages found in #1 and #2 are the same. Expected results: PASS if #3 true. FAIL if #3 is false Test 4.1_CSS_02 This test is targeted to find CSS styles that are used to control text directionality. For background information, see Authoring Techniques for "XHTML & HTML Internationalization: Handling Bidirectional Text 1.0" 28, "FAQ: CSS vs. markup for bidi support" 29 and the section "Text direction: the 'direction' and 'unicode-bidi' properties" in the CSS 2.0 specification 30. Applicability criteria: the CSS properties direction and unicode-bidi. * {direction: rtl; unicode-bidi: embed } Find CSS styles that use the properties direction and/or unicode-bidi. Expected results: FAIL if true. Fully automatable: yes. 5.6 Guideline 5 Create tables that transform gracefully. (See This guideline provides information on how to identify properly marked up tables Checkpoint 5.1 For data tables, identify row and column headers. [Priority 1] (See (X)HTML tests Test 5.1_HTML_01 This test is targeted to find data tables that do not have row and column headers July 2006 Page 62 of 140

63 Applicability criteria: all data tables. //table 1. Select the data cells in the data table. 2. For each data cell, check that there is a row header cell and a column header cell that can be identified by the sections "Algorithm to find heading information" or "Associating header information with data cells" in HTML Expected results: PASS if #2 is true. Fully automatable: No Test 5.1_HTML_02 This test is targeted to identify preformatted text used to display tabular information. Preformatted text does not have mechanisms to specify row and column headings. Applicability criteria: preformatted text. //pre Determine if the preformatted text is visually rendered as a table. Expected results: FAIL if true Checkpoint 5.2 For data tables that have two or more logical levels of row or column headers, use markup to associate data cells and header cells. [Priority 1] (See and the techniques in TECHS/#identifying-table-rows-columns) Test 5.2_HTML_01 This test is targeted to identify tables with two or more logical levels of rows or columns that are not marked up properly by using table markup that associates rows and columns. Applicability criteria: data tables where content in each data cell has a relationship with at least two row headers and/or at least two column July 2006 Page 63 of 140

headers.
//table
1. For each data cell, check that at least one of the following applies:
1.a. the headers attribute contains a space-separated list of all the values of the id attributes of the header cells with which the data cell has a relationship;
1.b. all column header cells have a scope attribute with the value 'col' and all row header cells have a scope attribute with the value 'row'.
Expected results: PASS if #1.a or #1.b is true.

Test 5.2_HTML_02
This test is targeted to determine if header cells in a heading with two or more levels are categorised consistently. This test does not require that axis should always be used, but that the categories identified by the attribute are appropriate or logical.
Applicability criteria: table headers with two or more levels.
//table[count(descendant::tr[th]) > 1]
//table[count(descendant::tr[td[@scope]]) > 1]
//table[descendant::tr[count(th) > 1]]
//table[descendant::tr[count(td[@scope]) > 1]]
//table[descendant::td[boolean(substring-after(substring-after(normalize-space(@headers), ' '), ' '))]]
For each header cell in a table header with two or more levels, check that any axis attribute consistently labels the category to which the header cell belongs. Note that the value of the axis attribute is a label that may be presented to a user, instead of a merely machine-readable class or name.
Expected results: FAIL if false.

Test 5.2_HTML_03
This test is targeted to find inconsistent structuring of tables. This test does not require that colgroup, thead, tfoot or tbody should always be used, but that their use is appropriate or logical.
Applicability criteria: tables defining column groups, table headings, table footers and table bodies.

65 //table[colgroup] //table[thead] //table[tfoot] //table[tbody] Check that each of the selected elements correctly structures the table. Expected results: FAIL if false Checkpoint 5.3 Do not use tables for layout unless the table makes sense when linearized. Otherwise, if the table does not make sense, provide an alternative equivalent (which may be a linearized version). [Priority 2] (See (X)HTML tests Test 5.3_HTML_01 This test is targeted to find layout tables that do not convey the same information when linearised. Applicability criteria: layout tables. //table[not(@summary) and not(child::caption)] //table Check that the table conveys the same information when linearised. 32 Expected results: PASS if true. FAIL if false Checkpoint 5.4 If a table is used for layout, do not use any structural markup for the purpose of visual formatting. [Priority 2] (See and the techniques in TECHS/#tech-table-layout) 32 The W3C has a Table Linearizer Web Service at 7 July 2006 Page 65 of 140

66 (X)HTML tests Test 5.4_HTML_01 This test is targeted to check that table headers are only used in data tables. Applicability criteria: tables with header cells. //table[descendant::th] Check that the table is a data table. Expected results: PASS if true. FAIL if false Test 5.4_HTML_02 This test is targeted to check that table headers and table footers are only used in data tables. Applicability criteria: tables with headers and/or footers. //table[thead] //table[tfoot] Check that the table is a data table. Expected result: PASS if true. FAIL if false Test 5.4_HTML_03 This test is targeted to check that id and headers attributes are only used in data tables. Applicability criteria: tables with one or more data cells with a headers attribute and one or more header cells with the id attribute. //table[descendant::th[@id]] //table[descendant::td[@id]] //table[descendant::td[@headers]] Check that the table is a data table. Expected result: PASS if true. FAIL if false. 7 July 2006 Page 66 of 140
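The tests under Checkpoint 5.4 (including the two that follow) all reduce to the same pattern: select tables that carry structural markup and ask whether each one is genuinely a data table. A minimal Python sketch of that candidate selection, with an intentionally crude layout-table heuristic (names and heuristic are illustrative assumptions) whose output an evaluator would still have to confirm:

    from lxml import html

    STRUCTURAL_MARKUP = [
        "descendant::th", "thead", "tfoot", "descendant::caption",
        "descendant::td[@headers]", "descendant::*[@axis]",
    ]

    def looks_like_layout_table(table) -> bool:
        """Crude illustrative heuristic: a single row or a single column."""
        rows = table.xpath("descendant::tr")
        return len(rows) <= 1 or all(len(r.xpath("td | th")) <= 1 for r in rows)

    def checkpoint_5_4_candidates(markup: str):
        """Tables with structural markup that may in fact be layout tables."""
        tree = html.fromstring(markup)
        suspicious = []
        for expr in STRUCTURAL_MARKUP:
            for table in tree.xpath("//table[%s]" % expr):
                if looks_like_layout_table(table) and table not in suspicious:
                    suspicious.append(table)
        return suspicious

    page = ('<table><tr><th>Menu</th></tr><tr><td>Home</td></tr></table>'
            '<table><tr><th>Year</th><th>Total</th></tr>'
            '<tr><td>2005</td><td>7</td></tr></table>')
    print(len(checkpoint_5_4_candidates(page)))  # 1 (the single-column "Menu" table)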

67 Test 5.4_HTML_04 This test is targeted to check that captions are only used for data tables. Applicability criteria: tables with a caption. //table[descendant::caption] Check that the table is a data table. Expected results: PASS if true. FAIL if false Test 5.4_HTML_05 This test is targeted to check that cells are only categorised in data tables. Applicability criteria: Tables in which cells are categorised by means of the axis attribute. //table[descendant::th[@axis]] //table[descendant::td[@axis]] Check that the table is a data table. Expected results: PASS if true. FAIL if false. 5.7 Guideline 6 Ensure that pages featuring new technologies transform gracefully. (See This guideline provides information on ensuring that pages are accessible even when newer technologies are not supported or are turned off Checkpoint 6.1 Organize documents so they may be read without style sheets. For example, when an HTML document is rendered without associated style sheets, it must still be possible to read the document. [Priority 1] (See and the techniques in 7 July 2006 Page 67 of 140

68 WEBCONTENT-TECHS/#tech-order-style-sheets) (X)HTML tests Test 6.1_HTML_01 This test analyses the effect on the readability of the document of CSS applied in standalone style sheets, embedded style sheets and style attributes of document elements. Note that the CSS 2.0 specification defines a style sheet as "A set of statements that specify presentation of a document" 33. This includes the statements in style attributes. Applicability criteria: HTML 4.01 and XHTML 1.0 documents with one or more associated style sheets. This includes style sheets attached to documents by means of HTTP headers (see the section "Linking to style sheets with HTTP headers" 34 in HTML 4.01). //link[@rel='stylesheet'] //link[@rel='alternate stylesheet'] //style //*/@style 1. Switch off, remove or deactivate all associated style sheets. 2. Check that content does not become invisible. 3. Check that content is not obscured by other content. 4. Check that the meaning is not changed by changes in reading order caused by step 1. Expected results: PASS if #2-4 are true. FAIL if #2, #3 or #4 is false Test 6.1_HTML_02 This test analyses the effect of programmatically applied styles on the readability of the document. Applicability criteria: any scripts and event handlers that change the presentation of content. //script //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown July 2006 Page 68 of 140

1. Deactivate all script in standalone script files applied to the content, script in script blocks and script in event handler attributes.
2. Check that content does not become invisible.
3. Check that content is not obscured by other content.
4. Check that the meaning is not changed by changes in reading order caused by step 1.
5. Check that on the occurrence of actions that would otherwise trigger events, content is still readable as defined in steps 2-4 when listeners are turned off.
Expected results: PASS if #2-5 are true. FAIL if #2, #3, #4 or #5 is false.

Checkpoint 6.2
Ensure that equivalents for dynamic content are updated when the dynamic content changes. [Priority 1]
(See and the techniques in

(X)HTML tests

Test 6.2_HTML_01
This test analyses the text equivalent of any non-text content loaded into a frame.
Applicability criteria: any non-text content referenced by the src attribute of a frame element.

35 Every attribute for intrinsic events except onunload. See the attribute definitions in the section "Intrinsic events" in HTML 4.01.

Check that there is an appropriate text equivalent for the current version of the non-text content.
Expected results: PASS if true. FAIL if false.

Test 6.2_HTML_02
This test analyses the text equivalent of any non-text content loaded into the frame by the browser as a result of link activation or script execution.
Applicability criteria: any non-text content that is loaded into a frame as a result of the activation of a link or the execution of a script.
//script
//a/@href
//*/@onfocus
//*/@onblur
//*/@onkeypress
//*/@onkeydown
//*/@onkeyup
//*/@onsubmit
//*/@onreset
//*/@onselect
//*/@onchange
//*/@onload
//*/@onclick
//*/@ondblclick
//*/@onmousedown
//*/@onmouseup
//*/@onmouseover
//*/@onmousemove
//*/@onmouseout
Check that there is an appropriate text equivalent for the current version of the non-text content.
Expected results: PASS if true. FAIL if false.

Test 6.2_HTML_03
This test is targeted to check that there are appropriate equivalents for non-text content that is added to the markup or the Document Object Model (DOM) by

71 means of scripts. Applicability criteria: any non-text content that is added to the markup or the DOM by means of scripts. //script Check that there is an appropriate text equivalent for the non-text content. Expected results: PASS if true. FAIL if false Checkpoint 6.3 Ensure that pages are usable when scripts, applets, or other programmatic objects are turned off or not supported. If this is not possible, provide equivalent information on an alternative accessible page. [Priority 1] (See and the techniques in TECHS/#tech-scripts) (X)HTML tests Test 6.3_HTML_01 This test determines whether information and functionality provided by embedded content is also available without said content. 7 July 2006 Page 71 of 140

72 Applicability criteria: applets (in a wide sense, not limited to Java; this includes Flash). //applet //object 1. Disable support for applets. 2. Check that the page is usable and that all functionality is still available. 3. If the page is not usable or some functionality is no longer available, check that there is an alternative accessible page with equivalent information. Expected results: PASS if #2 or #3 are true. FAIL if #2 and #3 are false Test 6.3_HTML_02 This test determines whether information and functionality provided by script is also available when script is not executed. Applicability criteria: (functionality provided by) scripts. //script //a[starts-with(@href, 'javascript:')] //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onunload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout 1. Turn off or disable (the browser's support for) scripts. 2. Check that the page is usable and that all functionality is still available. 3. If the page is not usable or some functionality is no longer available, check that there is an alternative accessible page with equivalent information. 7 July 2006 Page 72 of 140

73 Expected results: PASS if #2 or #3 are true. FAIL if #2 and #3 are false Checkpoint 6.4 For scripts and applets, ensure that event handlers are input deviceindependent. [Priority 2] (See and the techniques in WEBCONTENT-TECHS/#tech-keyboard-operable-scripts) (X)HTML tests Test 6.4_HTML_01 This test is targeted to check that mouse-specific event handlers have a keyboard-specific (or device-independent) version. Applicability criteria: elements with attributes for mouse-specific events, including event attributes added by scripts. //*[@onclick] //*[@onmousedown] //*[@onmouseup] //*[@onmouseout] //*[@onmouseover] Check that each of these elements has a keyboard-specific event handler attribute that triggers exactly the same function or functions as the mouse-specific event handler attribute. The following table maps device-specific event handlers: Mouse-specific onmousedown onmouseup onclick 36 onmouseover onmouseout Expected results: PASS if true. Keyboard-specific or device-independent onkeydown onkeyup onkeypress onfocus onblur Fully automatable: yes. 36 Most browsers will trigger the mouse onclick event handler when the Enter key is pressed, therefore it is not required to provide onclick when onkeypress is specified. 7 July 2006 Page 73 of 140
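A minimal Python sketch (lxml-based; names are illustrative assumptions) of the fully automatable check in Test 6.4_HTML_01 above: for every element carrying a mouse-specific handler it reports the missing keyboard counterpart from the mapping table. Whether the paired handler triggers exactly the same function still requires manual review.

    from lxml import html

    # Mapping table from Test 6.4_HTML_01 (footnote 36 may relax the
    # onclick/onkeypress pairing in practice).
    KEYBOARD_EQUIVALENT = {
        "onmousedown": "onkeydown",
        "onmouseup": "onkeyup",
        "onclick": "onkeypress",
        "onmouseover": "onfocus",
        "onmouseout": "onblur",
    }

    def test_6_4_html_01(markup: str):
        """Return (tag, mouse_handler, missing_keyboard_handler) findings."""
        tree = html.fromstring(markup)
        findings = []
        for element in tree.iter():
            if not isinstance(element.tag, str):
                continue                  # skip comments and processing instructions
            for mouse_attr, key_attr in KEYBOARD_EQUIVALENT.items():
                if element.get(mouse_attr) is not None and element.get(key_attr) is None:
                    findings.append((element.tag, mouse_attr, key_attr))
        return findings

    page = '<a href="#" onmouseover="showMenu()">menu</a>'
    print(test_6_4_html_01(page))  # [('a', 'onmouseover', 'onfocus')]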

Test 6.4_HTML_02
This test is targeted to check for mouse-specific event handlers for which no device-independent or keyboard-specific equivalent handlers are defined in the HTML 4 specification.
Applicability criteria:
//*[@ondblclick]
//*[@onmousemove]
1. Select any elements with a mouse-specific event handler attribute that does not have a keyboard-specific event handler attribute that executes exactly the same function.
2. Check that the functions performed by the event handlers can also be implemented in a mouse-independent way.
Expected results: PASS if #2 is true. FAIL if #2 is false.

Tests for external objects

Test 6.4_external_01
This test is targeted to check that event handlers in applets are device-independent.
Applicability criteria: applets (in a wide sense, not limited to Java; this includes Flash).
1. Select any applets loaded in a page.
2. Check that each function can be triggered through a keyboard interface.
Expected results: PASS if #2 is true. FAIL if #2 is false.

Checkpoint 6.5
Ensure that dynamic content is accessible or provide an alternative presentation or page. [Priority 2]
(See and the techniques in TECHS/#tech-fallback-page)
This checkpoint handles the accessibility of content that is dynamic. There are two ways of creating dynamic content: server-side dynamic content and client-

75 side dynamic content. In terms of (semi-)automatic testing, we are only interested in the client-side dynamic content, because we cannot really identify the presence of server-side dynamic content (X)HTML tests Test 6.5_HTML_01 This test is targeted to find framesets with inaccessible dynamic content and without a noframes section. Applicability criteria: framesets without a noframes section. //frameset[not(descendant::noframes)] Check that the dynamic content is accessible. Expected results: PASS if true. FAIL if false Test 6.5_HTML_02 This test is targeted to find framesets with inaccessible dynamic content and without a noframes section. Applicability criteria: framesets with dynamic content. //frameset 1. Check that the dynamic content is accessible. 2. If #1 is false, check that the frameset contains a noframes element with an alternative presentation or a link to an alternative presentation. Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false Test 6.5_HTML_03 This test is targeted to find links that use javascript. Applicability criteria: links with the 'javascript:' pseudo-protocol. //a[starts-with(@href, 'javascript:')] 1. Check that the URI does not use the 'javascript:' pseudo-protocol. 2. If step 1 is false, check that there is an alternative presentation or page with the same content. 7 July 2006 Page 75 of 140

76 Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false. 5.8 Guideline 7 Ensure user control of time-sensitive content changes. (See This guideline provides information on moving, blinking, scrolling, or autoupdating objects or pages, which make it difficult, sometimes even impossible, to read or access content Checkpoint 7.1 Until user agents allow users to control flickering, avoid causing the screen to flicker. [Priority 1] Note. People with photosensitive epilepsy can have seizures triggered by flickering or flashing in the 4 to 59 flashes per second (Hertz) range with a peak sensitivity at 20 flashes per second as well as quick changes from dark to light (like strobe lights). (See and the techniques in TECHS/#tech-avoid-flicker) Note: for checkpoints with an "until user agents" clause, WCAG 1.0 refers to the document "User Agent Support for Accessibility" 37 for information about user agent support for accessibility features. The current version of this document (last updated in on 11 August 2005) states: "Netscape Navigator (versions, platform), Microsoft Internet Explorer (versions, platform), and Opera (versions, platform) allow the user to turn off loading of images, scripts, and applets. Turning these off will allow the user to avoid flicker caused by images, scripts, and applets. For other plug-ins the user can choose not to load the plug-in. However, it would be ideal if users could stop, pause, or step through animations, scripts, or other dynamic content that may cause flicker as discussed in the UAGL checkpoint 3.7 and UAGL checkpoint 3.10." (X)HTML tests Test 7.1_HTML_01 This test is targeted to find marquee text that causes blinking. Marquee does not normally cause blinking, but certain combinations of scroll amount, scroll delay, font size and colour might cause parts of the screen to blink. Applicability criteria: marquee elements July 2006 Page 76 of 140

77 //marquee 1. Check that the marquee does not cause flickering or flashing in the 4 to 59 flashes per second (Hertz) range. 2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user. Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false Test 7.1_HTML_02 This test is targeted to find animated GIF files that cause flicker. (Other image file types for inclusion in HTML pages JPEG and PNG do not support animation.) Applicability criteria: animated GIF files. //img //object Check that the playback speed of frames and the colour contrast between subsequent frames do not cause flickering or flashing in the 4 to 59 flashes per second (Hertz) range. 2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user. Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false Test 7.1_HTML_03 This test is targeted to find client-side scripts that cause flicker or flashing. Applicability criteria: client-side scripts. //script //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange 38 The selector //object[@type='image/gif'] will miss certain GIF files if the object element's type attribute is not defined or incorrect. 7 July 2006 Page 77 of 140

78 Check that the script does not cause flicker or flashing at a rate between 4 and 59 Hertz. 2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user. Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false CSS tests Test 7.1_CSS_01 This test is targeted to find CSS-generated content that causes flicker or flashing. Applicability criteria: objects (images, videos, animations) embedded into a web page by means of CSS. *:after {content: url(...);} *:before {content: url(...);} 1. Check that the object does not cause flicker at a rate between 4 and 59 Hertz. 2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user. Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false Test for external objects Test 7.1_external_01 This test is targeted to find Java applets that cause flicker or flashing. Applicability criteria: Java applets. //object[@codetype='application/java'] //object[@codetype='application/java-archive] //object[starts-with(@codetype, 'application/x-java- 7 July 2006 Page 78 of 140

applet')]
//applet
any content sent by HTTP with MIME types 'application/java', 'application/java-archive', 'application/x-java-applet'
1. Check that the applet does not cause flicker or flashing at a rate between 4 and 59 Hertz.
2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user.
Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.

Test 7.1_external_02
This test is targeted to find any video content that causes flicker or flashing.
Applicability criteria: video content.
//object[starts-with(@type, 'video/')]
1. Check that the video content does not cause flicker or flashing at a rate between 4 and 59 Hertz.
2. If false, check that the flicker is an unavoidable aspect of the presentation and that it can be controlled by the user.
Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false.

Checkpoint 7.2
Until user agents allow users to control blinking, avoid causing content to blink (i.e., change presentation at a regular rate, such as turning on and off). [Priority 2]
(See and the techniques in TECHS/#tech-avoid-blinking)

(X)HTML tests

Test 7.2_HTML_01
This test is targeted to find any blink elements.
Applicability criteria: blink elements.

80 //blink Check that any such elements are found. Expected results: FAIL if true. Fully automatable: Yes Test 7.2_HTML_02 This test is targeted to find animated gif files that cause blinking. (Other image file types for inclusion in HTML pages JPEG and PNG do not support animation.) Applicability criteria: Animated GIF files. //img //object 39 Check that the image does not cause blinking. Expected results: PASS if true. FAIL if false. Fully automatable:yes, if contrast threshold is known Test 7.2_HTML_03 This test is targeted to find scripts that cause blinking. Applicability criteria: scripts. script The selector will miss certain GIF files if the object element's type attribute is not defined or incorrect. 7 July 2006 Page 80 of 140

81 Check that the script does not cause blinking. Expected results: PASS if true. FAIL if false. Fully automatable: No CSS tests Test 7.2_CSS_01 This test is targeted to find CSS-generated content that causes blinking. Applicability criteria: images, video and animations generated by CSS styles. *:after {content: url(...);} *:before {content: url(...);} Check that the content does not cause blinking. Expected results: PASS if true. FAIL if false. Fully automatable: No Test 7.2_CSS_02 This test is targeted to find CSS rules that cause content to blink. Applicability criteria: CSS rules with text-decoration: blink * { text-decoration: blink;} Check that there are no CSS rules with text-decoration: blink. Expected results: PASS if true. FAIL if false. Fully automatable: yes Tests for external objects Test 7.2_external_01 This test is targeted to find Java applets that cause blinking. Applicability criteria: applets (Java, Flash or other). //applet 7 July 2006 Page 81 of 140

82 //object 40 Check that the applet does not cause blinking. Expected results: PASS if true Test 7.2_external_02 This test is targeted to find any video content that causes blinking. Applicability criteria: video. //object 41 Check that the video content does not cause blinking. Expected results: PASS if true Checkpoint 7.3 Until user agents allow users to freeze moving content, avoid movement in pages. [Priority 2] (See and the techniques in TECHS/#tech-avoid-movement) (X)HTML tests Test 7.3_HTML_01 This test is targeted to find marquee elements. Applicability criteria: marquee elements. //marquee Check that any such elements are found. 40 Object elements for Java applets can have a codetype attribute with the values 'application/java', 'application/java-archive' or 'application/x-java-applet'. The test is also applicable to any content sent by HTTP with the MIME types 'application/java', 'application/javaarchive' or 'application/x-java-applet'. 41 Object element for video content can have a type attribute that starts with the value 'video/'. The test is also applicable to any content sent by HTTP with a MIME type that starts with 'video/'. 7 July 2006 Page 82 of 140

83 Expected results: FAIL if true. Fully automatable: yes Test 7.3_HTML_02 This test is targeted to check for scripts that cause movement. Applicability criteria: Scripts. //script Check that the script does not cause movement. 2. If false, check that the movement can be frozen. Expected results: PASS if #1 is true. PASS if #1 is false and #2 is true. FAIL if #1 and #2 are false CSS tests Test 7.3_CSS_01 This test is targeted to find CSS-generated content that causes movement. Applicability criteria: CSS-generated content. *:after {content: url(...);} *:before {content: url(...);} Check that the content does not cause movement. 7 July 2006 Page 83 of 140

84 Expected results: PASS if true. FAIL if false Tests for external objects Test 7.3_external_01 This test is targeted to check for external objects that cause or contain movement. Applicability criteria: applets (Java, Flash or other). //applet //object Check that the content of the applet does not contain or cause movement. 2. If false, check that the movement can be frozen. Expected results: PASS if #1 is true. PASS if #1 is false and #2 is true. FAIL if #1 and #2 are false Test 7.3_external_02 This test is targeted to check for video content that causes or contains movement. Applicability criteria: video. //object Check that the video content does not contain or cause movement. 2. If false, check that the movement can be frozen. Expected results: PASS if #1 is true. PASS if #1 is false and #2 is true. FAIL if #1 and #2 are false. 42 Object elements for Java applets can have a codetype attribute with the values 'application/java', 'application/java-archive' or 'application/x-java-applet'. The test is also applicable to any content sent by HTTP with the MIME types 'application/java', 'application/javaarchive' or 'application/x-java-applet'. 43 Object element for video content can have a type attribute that starts with the value 'video/'. The test is also applicable to any content sent by HTTP with a MIME type that starts with 'video/'. 7 July 2006 Page 84 of 140
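Several of the checks in Checkpoints 7.2 and 7.3 above are fully automatable because they only look for specific markup or CSS: blink and marquee elements (Tests 7.2_HTML_01 and 7.3_HTML_01) and text-decoration: blink rules (Test 7.2_CSS_02). A minimal Python sketch of these three, using lxml for the markup and a deliberately simple regular expression for the CSS (illustrative only; a real tool would use a CSS parser):

    import re
    from lxml import html

    BLINK_RULE = re.compile(r"text-decoration\s*:\s*blink", re.IGNORECASE)

    def flicker_markup_findings(markup: str, css_text: str = ""):
        """FAIL results for Tests 7.2_HTML_01, 7.3_HTML_01 and 7.2_CSS_02."""
        tree = html.fromstring(markup)
        findings = []
        if tree.xpath("//blink"):
            findings.append("7.2_HTML_01: blink element present")
        if tree.xpath("//marquee"):
            findings.append("7.3_HTML_01: marquee element present")
        # Inline style attributes and embedded style elements count as CSS too.
        inline_css = " ".join(tree.xpath("//style/text()") + tree.xpath("//*/@style"))
        if BLINK_RULE.search(css_text + " " + inline_css):
            findings.append("7.2_CSS_02: text-decoration: blink rule present")
        return findings

    page = '<body><marquee>News</marquee><p style="text-decoration: blink">Hi</p></body>'
    print(flicker_markup_findings(page))
    # ['7.3_HTML_01: marquee element present', '7.2_CSS_02: text-decoration: blink rule present']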

85 5.8.4 Checkpoint 7.4 Until user agents provide the ability to stop the refresh, do not create periodically auto-refreshing pages. [Priority 2] (See and the techniques in TECHS/#tech-no-periodic-refresh) (X)HTML tests Test 7.4_HTML_01 This test is targeted to find elements that can cause page refreshing. Applicability criteria: meta elements with http-equiv="refresh". 1. Check that the meta element contains a URI in its content attribute. 2. If true, check that the URI in the meta content attribute is not equal to the URI of the HTML resource containing the meta element. Expected results: PASS if #1 and #2 are true. FAIL if #1 is false. FAIL if #1 is true and #2 is false. Fully automatable: yes Test 7.4_HTML_02 This test is targeted to find scripts objects that can cause page refreshing. Applicability criteria: scripts. //script //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onunload //*/@onclick //*/@ondblclick //*/@onmousedown 7 July 2006 Page 85 of 140

86 1. Check that the script does not cause page refreshing. 2. If false, check that the refreshing can be stopped. Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false Tests for external objects Test 7.4_external_01 This test is targeted to check for external objects that can cause page refreshing. Applicability criteria: external objects. //applet //object 1. Check that the object does not cause page refreshing. 2. If false, check that the refreshing can be stopped. Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false Checkpoint 7.5 Until user agents provide the ability to stop auto-redirect, do not use markup to redirect pages automatically. Instead, configure the server to perform redirects. [Priority 2] (See and the techniques in TECHS/#tech-no-auto-forward) (X)HTML tests Test 7.5_HTML_01 This test is targeted to find elements that can cause page redirecting. Applicability criteria: meta elements with http-equiv="refresh". 7 July 2006 Page 86 of 140

87 Check that the URI in the meta element's content attribute is equal to the URI of the document containing the meta element. Expected results: PASS if true. FAIL if false. Fully automatable: yes Test 7.5_HTML_02 This test is targeted to find scripts that cause redirecting without providing a mechanism to stop the redirect. Applicability criteria: scripts. //script //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset //*/@onselect //*/@onchange //*/@onload //*/@onunload //*/@onclick //*/@ondblclick //*/@onmousedown //*/@onmouseup //*/@onmouseover //*/@onmousemove //*/@onmouseout 1. Check that the script does not cause redirecting. 2. If false, check that the redirecting can be stopped. Expected results: PASS if #1 is true. PASS if #1 is false and #2 is true. FAIL if #1 and #2 are false Tests for external objects Test 7.5_external_01 This test is targeted to check for external objects that can cause redirecting without providing a mechanism to stop this. 7 July 2006 Page 87 of 140

88 Applicability criteria: external objects. //applet //object 1. Check that the object does not cause redirecting. 2. If false, check that the redirecting can be stopped. Expected results: PASS if #1 is true. PASS if #1 is false and #2 is true. FAIL if #1 and #2 are false. 5.9 Guideline 8 Ensure direct accessibility of embedded user interfaces. (See This guideline provides information on how to create accessible embedded user interfaces Checkpoint 8.1 Make programmatic elements such as scripts and applets directly accessible or compatible with assistive technologies [Priority 1 if functionality is important and not presented elsewhere, otherwise Priority 2.] (See and the techniques in TECHS/#tech-directly-accessible) (X)HTML tests Test 8.1_HTML_01 This test is targeted to find scripts that are not directly accessible or compatible with assistive technologies. Applicability criteria: scripts. //script //*/@onfocus //*/@onblur //*/@onkeypress //*/@onkeydown //*/@onkeyup //*/@onsubmit //*/@onreset 7 July 2006 Page 88 of 140

89 If the script is a user agent, check that it conforms to the User Agent Accessibility Guidelines If the script is an authoring tool, check that it conforms to the Authoring Tool Accessibility Guidelines In other cases, check that the object is directly accessible or compatible with assistive technologies, either through an Accessibility API if available or by ensuring that all functionality is available when only a keyboard interface is used. Expected results: PASS if #1 or #2 or #3 is true. FAIL if #1, #2 and #3 are false Tests for external objects Test 8.1_external_01 This test is targeted find external objects that are not directly accessible or compatible with assistive technologies. Applicability criteria: all external objects. //applet //object 1. If the external object is a user agent, check that it conforms to the User Agent Accessibility Guidelines If the external object is an authoring tool, check that it conforms to the Authoring Tool Accessibility Guidelines In other cases, check that the object is directly accessible or compatible July 2006 Page 89 of 140

90 with assistive technologies, either through an Accessibility API if available or by ensuring that all functionality is available when only a keyboard interface is used. Expected results: PASS if #1 or #2 or #3 is true. FAIL if #1, #2 and #3 are false Guideline 9 Design for device-independence. (See This guideline provides information on how to create web content that does not rely on one specific input or output device Checkpoint 9.1 Provide client-side image maps instead of server-side image maps except where the regions cannot be defined with an available geometric shape. [Priority 1] (See and the techniques in TECHS/#tech-client-side-maps) (X)HTML tests Test 9.1_HTML_01 This test is targeted to find server-side image maps. Applicability criteria: server-side image maps. //a//img[@ismap] //input[@type='image'] Check that the server-side image map can not be replaced with a clientside image map. Expected results: PASS if true. FAIL if false. Fully automatable: yes This test is automatable as it will always produce a FAIL, because every geometric region can be defined by a polygon and every pixel can be defined as a circle with radius 0. 7 July 2006 Page 90 of 140
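Test 9.1_HTML_01 above is fully automatable because it only needs to locate server-side image maps (and, per footnote 46, effectively always yields FAIL). A minimal Python sketch of the selection step, with illustrative names:

    from lxml import html

    def test_9_1_html_01(markup: str):
        """Return the server-side image map candidates (each is a FAIL per footnote 46)."""
        tree = html.fromstring(markup)
        # Selectors from the test: //a//img[@ismap] and //input[@type='image'].
        candidates = tree.xpath("//a//img[@ismap] | //input[@type='image']")
        return [(el.tag, el.get("src")) for el in candidates]

    page = ('<a href="/map"><img src="regions.gif" ismap="ismap" alt="Regions"></a>'
            '<form><input type="image" src="go.gif" alt="Go"></form>')
    print(test_9_1_html_01(page))
    # [('img', 'regions.gif'), ('input', 'go.gif')]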

91 Checkpoint 9.2 Ensure that any element that has its own interface can be operated in a device-independent manner. [Priority 2] (See and the techniques in WEBCONTENT-TECHS/#tech-keyboard-operable) Tests for external objects Test 9.2_external_01 This test is targeted to check for the device-independence of the interface of embedded or included objects. Applicability criteria: all embedded or included objects. //applet //object 1. Check that the external object does not have its own interface. 2. If false, check that any operation supported by the interface can be operated in a device-independent manner by using only a keyboard or keyboard alternative interface. Expected results: PASS if #1 or #2 is true. FAIL if #1 and #2 are false Checkpoint 9.3 For scripts, specify logical event handlers rather than devicedependent event handlers. [Priority 2] (See and the techniques in (X)HTML tests Test 9.3_HTML_01 This test is targeted to check for possibly replacing device-dependent event handlers with device-independent ones. Applicability criteria: elements with device-dependent event handler attributes. 7 July 2006 Page 91 of 140

Check that the device-dependent event handler attributes cannot be replaced with device-independent ones.
Expected results: PASS if true. FAIL if false.
Fully automatable: yes

Guideline 10
Use interim solutions.
(See

Checkpoint 10.1
Until user agents allow users to turn off spawned windows, do not cause pop-ups or other windows to appear and do not change the current window without informing the user. [Priority 2]
(See and the techniques in TECHS/#tech-avoid-pop-ups)

(X)HTML tests

Test 10.1_HTML_01
This test is targeted to find any target attributes that cause content to be opened in a new window without informing the user.
Applicability criteria: a, area, form or link elements with a target attribute with a value that is not one of _self, _parent or _top.
//a[@target!='_self' and @target!='_parent' and @target!='_top']
//area[@target!='_self' and @target!='_parent' and @target!='_top']
//form[@target!='_self' and @target!='_parent' and @target!='_top']
//link[@target!='_self' and @target!='_parent' and @target!='_top']

93 Check that the content warns the user that activating the element spawns a new window. Expected results: PASS if true. FAIL if false Test 10.1_HTML_02 This test is targeted to find base elements with a target attribute that cause content to be opened in a new window without informing the user. Applicability criteria: base elements with a target attribute with a value that is not one of _self, _parent or _top. //base[@target!='_self' Check that the content warns the user that activating any a, area, form or link element spawns a new window. Expected results: PASS if true. FAIL if false Test 10.1_HTML_03 This test is targeted to find scripts that cause content to be opened in a new window without informing the user. Applicability criteria: scripts that cause content to be opened in a new window. //script //*/@onblur //*/@onchange //*/@onfocus //*/@onload //*/@onreset //*/@onselect //*/@onsubmit //*/@onunload //a[starts-with(@href, 'javascript:')] Check that the content warns the user that the script spawns a new window. 7 July 2006 Page 93 of 140

Expected results:
PASS if true.
FAIL if false.

Checkpoint 10.2 Until user agents support explicit associations between labels and form controls, for all form controls with implicitly associated labels, ensure that the label is properly positioned. [Priority 2]

(See and the techniques in TECHS/#tech-unassociated-labels)

(X)HTML tests

Test 10.2_HTML_01

This test is targeted to find form controls with improperly positioned, implicitly associated labels.

Applicability criteria: visible form controls that can have labels.

//select
//textarea

Check that any implicitly associated label for the control is properly positioned.

Expected results:
PASS if true.
FAIL if false.

Guideline 11 Use W3C technologies and guidelines.

(See )

This guideline recommends using W3C technologies and describes what to do if other technologies are used.

Checkpoint 11.1 Use W3C technologies when they are available and appropriate for a task and use the latest versions when supported. [Priority 2]

(See and the techniques in TECHS/#tech-latest-w3c-specs)

(X)HTML tests

Test 11.1_HTML_01

This test is targeted to find out whether the latest versions of W3C technologies for HTML and XHTML have been used.

Applicability criteria: HTTP content header and document type declaration.

1. Check that the HTTP content type is text/html or application/xhtml+xml.
2. If the HTTP content type is text/html, check that the document type is HTML 4.01 or XHTML 1.0.
3. If the HTTP content type is application/xhtml+xml, check that the document type is XHTML 1.0 or XHTML 1.1.
4. If the document type is HTML 4.01, check that the document conforms to the DTD with a validating SGML parser.
5. If the document type is XHTML 1.0 or XHTML 1.1, check that the document conforms to the DTD with a validating XML parser.

Expected results:
FAIL if any is false.

Fully automatable: yes

Checkpoint 11.2 Avoid deprecated features of W3C technologies. [Priority 2]

(See and the techniques in TECHS/#tech-avoid-deprecated)

Note that deprecated can mean different things in different specifications. In HTML 4.01 it means the following: A deprecated element or attribute is one that has been outdated by newer constructs. Deprecated elements are defined in the reference manual in appropriate locations, but are clearly marked as deprecated. Deprecated elements may become obsolete in future versions of HTML. User agents should continue to support deprecated elements for reasons of backward compatibility. It is possible that a deprecated feature is deprecated in favour of another feature that is not well supported (e.g. object is intended to replace applet).

(X)HTML tests

Test 11.2_HTML_01

This test is targeted to find deprecated HTML elements.

Applicability criteria: deprecated elements in the HTML 4.01 specification.

//applet
//basefont
//center
//dir
//font
//isindex
//menu
//s
//strike
//u

Check that any of these elements are present.

Expected results:
PASS if false.
FAIL if true.

Fully automatable: yes

Test 11.2_HTML_02

This test is targeted to find deprecated HTML attributes.

Applicability criteria: deprecated attributes in the HTML 4.01 specification.

XHTML formally deprecates the use of the name attribute (see footnote 49).

Check that any of these attributes are present.

Expected results:
PASS if false.
FAIL if true.

Fully automatable: yes

Checkpoint 11.4 If, after best efforts, you cannot create an accessible page, provide a link to an alternative page that uses W3C technologies, is accessible, has equivalent information (or functionality), and is updated as often as the inaccessible (original) page. [Priority 1]

[49] Section 4.10 in the XHTML 1.0 specification: "The elements with 'id' and 'name' attributes":

(See and the techniques in TECHS/#tech-alt-pages)

(X)HTML tests

Test 11.4_HTML_01

This test looks for the alternative content and checks whether it is equivalent.

Applicability criteria: linked alternative page.

document(//a/@href)

1. Compare functionality and information content of the two delivery units.
2. Check that they are equivalent.

Expected results:
PASS if #2 is true.
FAIL if #2 is false.

Test 11.4_HTML_02

This test looks for the alternative content and checks whether it is accessible.

Applicability criteria: linked alternative page.

document(//a/@href)

1. Identify the alternative content.
2. Check that it conforms to all tests for Priority 1 checkpoints in this section of UWEM other than this one.

Expected results:
PASS if #2 is true.
FAIL if #2 is false.

Test 11.4_HTML_03

This test looks for the alternative content and checks whether the original content could have been made accessible.

Applicability criteria: linked alternative page.

document(//a/@href)
document(//link/@href)

1. Identify the alternative content.
2. Check that the original content could not have been made accessible without undue effort.

Expected results:
PASS if #2 is true.
FAIL if #2 is false.

Guideline 12 Provide context and orientation information.

(See )

This guideline provides information on how to provide contextual and orientation information to help users understand complex pages or elements.

Checkpoint 12.1 Title each frame to facilitate frame identification and navigation. [Priority 1]

(See )

(X)HTML tests

Test 12.1_HTML_01

This test is targeted to find frames without a description.

Applicability criteria: frame elements without a title attribute.

Check that any such frames are found.

Expected results:
FAIL if true.

Fully automatable: yes

Test 12.1_HTML_02

This test is targeted to check whether the title attribute identifies the frame.

Applicability criteria: frame elements with a title attribute.

1. Select elements.
2. Check that the title identifies the frame.

Expected results:
PASS if #2 is true.
FAIL if #2 is false.

Checkpoint 12.2 Describe the purpose of frames and how frames relate to each other if it is not obvious by frame titles alone. [Priority 2]

(See )

(X)HTML tests

Test 12.2_HTML_01

This test is targeted to check whether the long description represents the context of the frame, if it is not clear by the frame title alone.

Applicability criteria: documents referenced by a frame element's longdesc attribute or by an a element's href attribute in a noframes element, the purpose of which is to describe the context of a frame.

document(//frame/@longdesc)
document(//noframes//a/@href)

1. Select the long description document referenced by the element.
2. Check that the frame and its context are described by the text in the document, if not obvious by the frame title alone.

Expected results:
PASS if #2 is true.
FAIL if #2 is false.

Checkpoint 12.3 Divide large blocks of information into more manageable groups where natural and appropriate. [Priority 2]

(See )

(X)HTML tests

Test 12.3_HTML_01

This test is targeted to find fieldsets without legend.

Applicability criteria: fieldset elements without legend child element.

//fieldset[not(legend)]

Check that any such fieldsets are found.

Expected results:
PASS if false.
FAIL if true.

Fully automatable: yes

Test 12.3_HTML_02

This test is targeted to check whether the legend describes the meaning of the fieldset.

Applicability criteria: legend elements.

//legend

Check that the legend represents the context of the fieldset parent element.

Expected results:
PASS if true.
FAIL if false.

Test 12.3_HTML_03

This test is targeted to check whether the elements are grouped in a practical way.

Applicability criteria: fieldset elements.

//fieldset

Check that form control elements in the fieldset are grouped in a practical way.

Expected results:
PASS if true.
FAIL if false.
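Test 12.3_HTML_01 above is marked fully automatable; its applicability expression can be evaluated directly against the parsed page. A minimal, non-normative sketch in Python with lxml follows; the function name find_fieldsets_without_legend is illustrative only.

from lxml import html

def find_fieldsets_without_legend(page_source):
    """Return the tree paths of fieldset elements that have no legend child.

    Each returned element is a FAIL instance for Test 12.3_HTML_01; an empty
    list means the test passes (or is not applicable) for this page.
    """
    document = html.fromstring(page_source)
    # Applicability expression taken from Test 12.3_HTML_01.
    offending = document.xpath("//fieldset[not(legend)]")
    tree = document.getroottree()
    return [tree.getpath(element) for element in offending]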

Test 12.3_HTML_04

This test is targeted to find optgroup elements without label.

Applicability criteria: optgroup elements without a label attribute.

//optgroup[not(@label)]

Check that any such optgroup elements are found.

Expected results:
FAIL if true.

Fully automatable: yes

Test 12.3_HTML_05

This test is targeted to check whether the label describes the meaning of the optgroup.

Applicability criteria: label attribute of optgroup elements.

//optgroup/@label

Check that the label represents the context of the optgroup.

Expected results:
PASS if true.
FAIL if false.

Test 12.3_HTML_06

This test is targeted to check whether the option elements are grouped in a practical way.

Applicability criteria: optgroup elements.

//optgroup

Check that the elements are grouped in a practical way.

Expected results:
PASS if true.
FAIL if false.

Test 12.3_HTML_07

This test is targeted to find tables without caption.

Applicability criteria: data tables implemented by a table element without caption child element.

//table[not(caption)]

Check that any such table elements are found.

Expected results:
PASS if false.
FAIL if true.

Test 12.3_HTML_08

This test is targeted to check whether the caption describes the meaning of the table.

Applicability criteria: caption element in a data table.

//caption

Check that the caption describes the nature of the table.

Expected results:
PASS if true.
FAIL if false.

Test 12.3_HTML_09

This test is targeted to check whether the elements are grouped in a practical way.

Applicability criteria: thead, tbody, tfoot, and colgroup elements.

//thead
//tbody
//tfoot
//colgroup

Check that their child elements are grouped in a practical way.

Expected results:
PASS if true.
FAIL if false.

Test 12.3_HTML_10

This test is targeted to check whether the elements are grouped in a practical way.

Applicability criteria: ul, ol, and dl elements.

//ul
//ol

//dl

Check that their child elements are grouped in a practical way.

Expected results:
PASS if true.
FAIL if false.

Test 12.3_HTML_11

This test is targeted to check whether the elements are structured in a practical way.

Applicability criteria: paragraphs (p elements).

//p

Check that the paragraph is used to structure text in a practical way.

Expected results:
PASS if true.
FAIL if false.

Test 12.3_HTML_12

This test is targeted to check whether form controls need grouping.

Applicability criteria: form element without a fieldset descendant element.

//form[not(.//fieldset)]

Check that the form controls in the form do not need grouping.

Expected results:
PASS if true.
FAIL if false.

Test 12.3_HTML_13

This test is targeted to check whether the options need grouping.

Applicability criteria: select element without optgroup child element.

//select[not(optgroup)]

Check that options in the select element do not need grouping.

Expected results:
PASS if true.
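Tests 12.3_HTML_12 and 12.3_HTML_13 above require a human judgement, but their applicability expressions can still be evaluated automatically to collect the candidate elements for manual review. A minimal, non-normative sketch in Python with lxml; the function name collect_grouping_candidates is illustrative only.

from lxml import html

# Applicability expressions taken from Tests 12.3_HTML_12 and 12.3_HTML_13.
GROUPING_CANDIDATE_XPATHS = {
    "12.3_HTML_12": "//form[not(.//fieldset)]",
    "12.3_HTML_13": "//select[not(optgroup)]",
}

def collect_grouping_candidates(page_source):
    """Map each test id to the tree paths of elements a reviewer should judge."""
    document = html.fromstring(page_source)
    tree = document.getroottree()
    return {
        test_id: [tree.getpath(element) for element in document.xpath(expression)]
        for test_id, expression in GROUPING_CANDIDATE_XPATHS.items()
    }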

Test 12.3_HTML_14

This test is targeted to check whether the table rows need grouping.

Applicability criteria: tables without headers or footers.

//table[not(thead) or not(tfoot) or count(tbody)<2]

Check that the table rows do not need grouping.

Expected results:
PASS if true.

Test 12.3_HTML_15

This test is targeted to check whether text needs grouping with headings and paragraphs.

Applicability criteria: body text.

//body//text()

Check that the text is grouped appropriately with heading and paragraph elements.

Expected results:
PASS if true.
FAIL if false.

Checkpoint 12.4 Associate labels explicitly with their controls. [Priority 2]

(See )

(X)HTML tests

Test 12.4_HTML_01

This test is targeted to find form control elements without id.

Applicability criteria: input or selection form control elements without id attribute.

Check that any such elements are found.

Expected results:
PASS if false.
FAIL if true.

Fully automatable: yes

Test 12.4_HTML_02

This test is targeted to find form control elements without an associated label element.

Applicability criteria: input or selection form control elements without an associated label element.

Check that there is a label element in the document, the for attribute of which equals the form control element's id attribute.

Expected results:
PASS if true.
FAIL if false.

Fully automatable: yes

Guideline 13 Provide clear navigation mechanisms.

(See )

This guideline provides information on how to provide clear and consistent navigation mechanisms to help users find what they are looking for.

Checkpoint 13.1 Clearly identify the target of each link. [Priority 2]

(See and the techniques in WEBCONTENT-TECHS/#tech-meaningful-links)

(X)HTML tests

Test 13.1_HTML_01

This test is targeted to find a, input and area elements with the same title and text but with different link targets (href). If no title attribute is provided, only the element text is checked.

Applicability criteria: a, input and area elements with the same link text and title attributes, but different link targets.

//a
//area
//input

1. Select elements pairwise.
2. Check that two links with the same link text (including alternative text for non-text elements) and the same title attribute (if provided) point to the same resource.

Expected results:
PASS if #2 is true.
FAIL if #2 is false.

Fully automatable: yes

Test 13.1_HTML_02

This test is targeted to find a, input and area elements with link text that does not clearly identify the target of the link.

Applicability criteria: a, input and area elements with the same link text and title attributes, but different link targets.

//a
//area
//input

1. Check that the link text (including alternative text for non-text elements) clearly describes the effect of activating the element.

Expected results:
PASS if true.
FAIL if false.

Fully automatable: no

Checkpoint 13.2 Provide metadata to add semantic information to pages and sites. [Priority 2]

(See and the techniques in TECHS/#tech-use-metadata)

(X)HTML tests

Test 13.2_HTML_01

This test is targeted to find a document title element in a web resource.

Applicability criteria: documents.

1. Check that there is a title element.
2. Check that the title is not empty.
3. Check that the title element provides an appropriate title for the resource.

Expected results:
PASS if true.
FAIL if false.

Fully automatable: no

Checkpoint 13.3 Provide information about the general layout of a site (e.g., a site map or table of contents). [Priority 2] In describing site layout, highlight and explain available accessibility features.

(See and the techniques in WEBCONTENT-TECHS/#tech-site-description)

(X)HTML tests

Test 13.3_HTML_01

This test is targeted to find a web site without information about the general layout of the site.

Applicability criteria: the whole web site.

1. Check that the web site contains a document containing a description of the general layout of the site. A site map or a table of contents would meet this requirement.
2. Check that this document can be reached from the index page.

Expected results:
PASS if #1 and #2 are true.
FAIL if #1 or #2 is false.

Checkpoint 13.4 Use navigation mechanisms in a consistent manner. [Priority 2]

(See and the techniques in WEBCONTENT-TECHS/#tech-clear-nav-mechanism)

(X)HTML tests

Test 13.4_HTML_01

This test is targeted to check whether navigation mechanisms are used in a consistent manner.

Applicability criteria: navigation menus and navigation bars.

1. Select navigation elements on several pages.
2. Check that the navigation facilities are similar with respect to presentation (location in the rendered document, location in content rendered without style sheets, colours, font) and behaviour.

Expected results:
PASS if #2 is true.
FAIL if #2 is false.

Guideline 14 Ensure that documents are clear and simple.

(See )

This guideline provides information on how to create clear and simple documents.

Checkpoint 14.1 Use the clearest and simplest language appropriate for a site's content. [Priority 1]

(See and the techniques in WEBCONTENT-TECHS/#tech-icons)

(X)HTML tests

Test 14.1_HTML_01

This test is targeted to analyse the readability of the content.

Applicability criteria: whole document.

1. Read the text.
2. Check that the language is simple.
3. If not, check that the language used in the document is necessary to convey the information.

Expected results:
PASS if #2 or #3 is true.
FAIL if #2 and #3 are false.
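Test 14.1_HTML_01 is a manual judgement, and UWEM does not prescribe an automated readability metric. Purely as an illustrative aid for deciding which pages to prioritise for that manual review, a crude indicator such as average sentence length can be computed automatically; the function names and the threshold below are assumptions, not part of the methodology.

import re

def average_sentence_length(text):
    """Return the average number of words per sentence in extracted body text."""
    # Naive sentence split on ., ! and ?; adequate only as a rough screening aid.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    word_counts = [len(s.split()) for s in sentences]
    return sum(word_counts) / len(sentences)

def needs_closer_review(text, threshold=25):
    """Flag text whose sentences are, on average, longer than an assumed threshold."""
    return average_sentence_length(text) > threshold

Such a score can only suggest where to look first; the PASS/FAIL decision for the test itself remains with the evaluator.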

6 Aggregation model for test results

6.1 Introduction

For automatic and/or expert evaluation it is important that the individual assessments can be aggregated into an end result that is easy to understand and has a clear interpretation. Aggregation of test results is possible at different levels, such as the checkpoint or test level. One metric that can also be of interest to policy makers is the accessibility barrier probability. The European Commission regulation 808/2004 concerning community statistics for the information society explicitly states that one characteristic to be provided is barriers to the use of ICT, Internet and other electronic networks, e-commerce and e-business processes. This section describes a model for calculating the accessibility barrier probability for single Web pages and Web sites.

6.2 Approach

Web accessibility evaluation can be viewed as a three-stage process. Figure 2 and Figure 3 summarise the Web accessibility evaluation process. The notation in the figures that will be used throughout this section is introduced in section 6.3.

In the first stage, W3C's Evaluation and Report Language (EARL) is used as a standardised format for collecting and conveying test results from accessibility assessment tools according to any given standard. The model also supports the weighting of the test reports according to their error probability, such that the contribution of tests with lower confidence to the overall result is reduced.

The second stage performs aggregation of the individual test results into one comprehensive figure for a web page. The calculation is based on a statistical model (the user centric accessibility barrier model, UCAB, introduced in section 6.3.4). The underlying assumption is that the accessibility barriers within a web page accumulate.

To present the results to the public, e.g., people who experience barriers (such as people with disabilities) or users of the data for planning and development purposes (such as policy makers and stakeholders), the third stage of Web accessibility evaluation needs to provide means for interpreting the results. This includes statistical analyses of the findings and presentation of average values for a Web site or for groups of Web sites by geographical region or business sector. This stage can be viewed as the business logic for estimating the accessibility barriers. The reporting of findings will be covered in section 7 and the following section.

This probabilistic model for accessibility barriers is work in progress. It will be evaluated and refined if necessary during the evaluation phases of UWEM. Aggregation of accessibility barriers will probably be based around these general ideas, but the exact model is subject to change during the evaluation phase.
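The exact UCAB formulation is introduced later in section 6.3.4 and, as noted above, is still subject to change. Purely as an illustration of the "barriers accumulate" assumption in the second stage, a page-level barrier probability could be combined from per-test barrier probabilities as follows; the function name and the independence assumption are illustrative and not the normative UWEM model.

def page_barrier_probability(test_barrier_probabilities):
    """Combine per-test barrier probabilities into one page-level figure.

    Illustrative only: assumes each test result contributes an independent
    chance of a barrier, so the page is barrier-free only if no individual
    test result produces a barrier.
    """
    probability_no_barrier = 1.0
    for p in test_barrier_probabilities:
        probability_no_barrier *= (1.0 - p)
    return 1.0 - probability_no_barrier

# Example: three failed tests with assumed barrier probabilities.
print(page_barrier_probability([0.2, 0.1, 0.05]))  # approximately 0.316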

Figure 2: The first and second stage of the Web site evaluation process.

Figure 3: The third stage of the Web site evaluation process.

Section 6.4 concentrates on the second stage of Web accessibility evaluation and provides further details on the statistical model used in the aggregation. Section 6.5 describes how an average value for a Web site can be derived from the results of stage 2.

6.3 Definitions and mathematical background

In this section we explain the main concepts involved in the modelling of Web accessibility barriers. Subsequently we show how they can be transferred into a statistical model and introduce the UWEM User Centric Accessibility Barrier Model (UCAB).

Definitions

Web page
A resource on the web as defined by section 4.

Barrier
An accessibility barrier is modelled as a product failure caused by an incompatibility between the needs of a disabled user and product functionality. The incompatibility is caused by the web page, i.e., it is not the user's fault.

Barrier type
