Hands-off Use of Computer towards Universal Access through Voice Control Human-Computer Interface


Dalila Landestoy, Melvin Ayala, Malek Adjouadi, and Walter Tischer
Center for Advanced Technology and Education
Electrical & Computer Engineering Department, Florida International University
W. Flagler Street, Miami, FL 33174, USA

Abstract: This study presents a user-friendly programming tool developed to assist persons with motor disabilities in communicating with computers via voice commands. The main purpose of the study was to develop a human-computer interface (HCI) able to convert voice commands into computer actions. To this end, an interface was developed to facilitate communication between the MS Speech Recognition Engine and the MS Windows operating system so that dictated commands are executed as if they had been triggered by mouse or keyboard operations. Experiments were conducted to statistically establish a confusion matrix and assess the recognition accuracy of the voice-controlled HCI. Validation results from 15 subjects yielded an overall command recognition accuracy of 84% and, unexpectedly, a high degree of confusion in most subjects between voice commands such as Mouse Right Click and Mouse Drag. This outcome appears to be an issue within the MS Speech Recognition Engine and calls for an investigation of why such confusion arises in order to guide further improvements of the voice interface.

Key-Words: Speech Recognition, Motor Disability, Voice Commands, Confusion Degree.

1 Introduction

The HCI design proposed in this study abides by the design principle "less obvious but powerful" and by the notion that people with disabilities should be able to use products and services like everyone else does; if assistive or adaptive technologies are needed, they should be as free of constraints as possible. This design approach therefore seeks intelligent software developments that support the goal of universal access without burdening users with intrusive or overly complex control mechanisms that are time consuming and difficult to integrate. The interface of choice would be multimodal in order to address the different constraints imposed by a disability. The proposed voice-based interface, however, is intended for individuals with severe motor disability who can still use their voice to issue the appropriate commands. As designed, this HCI is augmented with real-time, voice-induced software commands for real-time applications such as web browsing, e-mail, and editing, and it facilitates human-computer interaction in a user-friendly manner. This is a major undertaking that will result in an alpha prototype that is reliable, accessible, cost effective, and compatible with up-to-date assistive/adaptive technology interfaces and with common computer platforms.

Current speech recognition (SR) systems have evolved technologically to the point where they influence HCI designs [1, 2, 3] that can be used by disabled individuals to interact with computers. Speech recognition in this case refers to a technology that runs on a personal computer and converts spoken words into text in a word processor [4, 5]. Words are entered into the computer via a microphone connected to a sound card. Companies devoted to the commercialization of speech recognition systems have become more successful as the technology became more viable.

Currently, SR systems are produced and used by a large number of companies [6]. Problems inherent to these systems continue to be the time needed to train the system to produce a voice profile unique to the user, and the minimization of training errors that result in unrecognized voice commands [7]. Other problems lie in scalability as the vocabulary grows and in the adaptability of the engine to different users. Modalities such as mouse button operations driven by voice commands have unfortunately not been widely used to assist motor-disabled individuals [8]. The main thrusts of this study are (1) to develop a user-friendly, low-cost speech recognition computer interface for motor-disabled persons, and (2) to test and validate the system on different subjects for a performance evaluation.

2 General Design of the System

The proposed application combines several technical resources: Microsoft Visual Studio .NET [9], the User32 library, a Microsoft Access database [10], and Microsoft Speech Recognition technology [11]. The main functionality is implemented in Visual Basic .NET, which serves as the bridge between the different components used in the application. Its basic structure is an on-screen keyboard that receives events from the voice recognition engine, as illustrated in Fig. 1. The .NET application is the on-screen keyboard, whose input is the human voice. Once this input is received by the application, the speech recognition engine is initialized and loads its grammar rules from an XML file. During recognition, the Recognition event receives the results from the engine, and the result is passed as an argument to a function call that executes the desired command. Finally, the command is passed to the operating system and its effect can be observed directly, for example when the mouse moves in a certain direction.

Fig. 1: Block diagram of the recognition process

3 Speech Recognition Engine

The SR engine was instantiated and initialized with grammar rules from an XML file, which forms the basis of the SR engine vocabulary. To create the recognition engine object, a recognition context, which is the primary means by which an application interacts with the Speech Application Programming Interface (SAPI) for speech recognition, was created in a two-step process: the context was first declared and then created by instantiating the specified class with a reference to a new object. This initializes the SR engine. All applications on the machine using shared recognition contexts share a single audio input, the SR engine, and the grammars. When the user speaks, the SR engine performs recognition, and SAPI decides which context to send the recognition result to, based on which grammar best matches the result. An ISpeechRecoGrammar object was then created, which enables applications to manage the words and phrases for the SR engine, and the grammar was explicitly created. This application uses a command-and-control grammar, which restricts the speaker to a small word list; users can therefore speak a command, usually a single word or short phrase, with a greater chance of recognition. The grammar must then be initialized by loading it from a file. This is achieved with the CmdLoadFromFile method, which loads a command-and-control grammar from the specified file and takes two main parameters: the complete path to the XML file on the machine, and the load option, which specifies whether the grammar is loaded for static or dynamic use. In this case, the grammar was loaded dynamically. Finally, the grammar was activated via its state property. All of these steps are executed when the form is initially loaded.

An SR application has a wide variety of events, most of which are active by default. Once the application receives a recognition event, the Recognition code is invoked automatically and immediately, and each event is associated with a particular recognition context. There are two important possible results from a recognition attempt, the Recognition event and the FalseRecognition event, and both were used in the development of this computer interface.
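The paper does not include source code; the following VB.NET sketch illustrates how such an initialization is typically done with the SAPI 5.1 COM interop (SpeechLib). The form name, the grammar file path, and the ExecuteCommand helper are illustrative assumptions, not the authors' actual implementation.

' Minimal sketch of the shared recognition context set-up described above.
' Assumes a COM reference to the Microsoft Speech Object Library (SpeechLib);
' the grammar path and ExecuteCommand are hypothetical.
Imports SpeechLib

Public Class VoiceKeyboardForm
    Inherits System.Windows.Forms.Form

    ' Declared WithEvents so the recognition events can be handled below.
    Private WithEvents recoContext As SpSharedRecoContext
    Private grammar As ISpeechRecoGrammar

    Private Sub VoiceKeyboardForm_Load(ByVal sender As Object, ByVal e As EventArgs) _
            Handles MyBase.Load
        ' Creating the shared context also initializes the shared SR engine.
        recoContext = New SpSharedRecoContext()

        ' Create a grammar and load the command-and-control rules dynamically.
        grammar = recoContext.CreateGrammar()
        grammar.CmdLoadFromFile("C:\VoiceHCI\Commands.xml", SpeechLoadOption.SLODynamic)

        ' Activate the grammar (state property) and its top-level rules
        ' (rule id 0 addresses all rules) so the engine listens for commands.
        grammar.State = SpeechGrammarState.SGSEnabled
        grammar.CmdSetRuleIdState(0, SpeechRuleState.SGDSActive)
    End Sub

    ' Raised when the engine recognizes one of the grammar rules.
    Private Sub recoContext_Recognition(ByVal StreamNumber As Integer, _
            ByVal StreamPosition As Object, _
            ByVal RecognitionType As SpeechRecognitionType, _
            ByVal Result As ISpeechRecoResult) Handles recoContext.Recognition
        ' The rule name identifies the spoken command (e.g. "MouseClick").
        ExecuteCommand(Result.PhraseInfo.Rule.Name)
    End Sub

    ' Raised when audio was detected but no rule matched.
    Private Sub recoContext_FalseRecognition(ByVal StreamNumber As Integer, _
            ByVal StreamPosition As Object, _
            ByVal Result As ISpeechRecoResult) Handles recoContext.FalseRecognition
        ' Unrecognized input (counted as C15 in the experiments) is ignored here.
    End Sub

    Private Sub ExecuteCommand(ByVal ruleName As String)
        ' Hypothetical dispatcher: map the rule name to a mouse action (Section 5).
    End Sub
End Class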

4 XML File Description

The voice commands that the application responds to are defined in an XML file that is dynamically loaded into the Grammar object at runtime (Table 1). Each command in the file is a grammar rule defined with an ID name and value; the ID is unique and identifies the spoken word to be recognized by the SR engine as a voice command. In the tags that define a command, attributes can be set to give one word precedence over another, which enables phrases to be used as commands rather than single words. For example, the engine recognizes the words "Mouse Click" as a command but not "Click Mouse", because the word "Mouse" has precedence over the word "Click" and is recognized as the first part of a command. If the engine successfully recognizes a command from the XML file, it includes the associated ID in the event passed to the application. The text grammar format is composed of XML tags that can be structured to define the phrases the speech recognition engine recognizes; the formal schema for this format is defined in a separate document, XML Schema: Grammar. Grammar rules are elements that SR engines use to restrict the possible word or sentence choices during the SR process, controlling sentence construction through a predetermined list of recognized words or phrases. This list, contained in the grammar rules, forms the basis of the SR engine vocabulary.

Table 1: Commands defined in the XML file

Voice Command        Resulting Action
Mouse Click          A click is simulated at the present cursor position
Mouse Double Click   A double click is simulated at the present cursor position
Mouse Right Click    A right click is simulated at the present cursor position
Mouse Move Left      A move to the left is simulated
Mouse Move Right     A move to the right is simulated
Mouse Move Down      A move down is simulated
Mouse Move Up        A move up is simulated
Mouse Drag           A right button down is simulated
Mouse Release        A right button up is simulated
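The paper does not reproduce the grammar file itself; the fragment below is a sketch of what a SAPI 5.1 command-and-control grammar along these lines could look like. The rule names, IDs, and phrases are illustrative assumptions (only three of the commands from Table 1 are shown), and LANGID 409 denotes U.S. English.

<GRAMMAR LANGID="409">
    <!-- Each RULE defines one voice command; its NAME and ID are reported
         back to the application when the phrase is recognized. -->
    <RULE NAME="MouseClick" ID="101" TOPLEVEL="ACTIVE">
        <P>mouse click</P>
    </RULE>
    <RULE NAME="MouseRightClick" ID="102" TOPLEVEL="ACTIVE">
        <P>mouse right click</P>
    </RULE>
    <RULE NAME="MouseDrag" ID="103" TOPLEVEL="ACTIVE">
        <P>mouse drag</P>
    </RULE>
</GRAMMAR>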
5 Mouse Functionalities

In this study, the problem of mouse operation was addressed by importing several API (application programming interface) functions from the User32 library in order to communicate directly with the operating system. Most of the mouse functionality was implemented using a combination of API functions and SR technology. To accomplish this, the mouse_event function was imported to synthesize mouse events and motion. Its parameters matter because each one carries information needed for the desired task: dwFlags specifies various aspects of mouse motion and button clicking, dx specifies the mouse's position along the x-axis, and dy specifies its position along the y-axis. Other constants were also imported to be passed as parameters to the mouse_event function. To implement the mouse click event, a subroutine called MouseClick was created; the mouse double click action is achieved simply by calling MouseClick twice, and another subroutine was implemented to produce the right click action.
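As a concrete illustration, a minimal VB.NET declaration of mouse_event together with the click subroutines described above might look as follows. The module and subroutine names simply mirror the paper's description and are not the authors' code; the constants are the standard MOUSEEVENTF_* flags of the Windows API.

' Sketch of the User32 import and click simulation described above.
Public Module MouseApi
    ' mouse_event (user32.dll) synthesizes mouse motion and button events.
    Public Declare Sub mouse_event Lib "user32.dll" ( _
        ByVal dwFlags As Integer, ByVal dx As Integer, ByVal dy As Integer, _
        ByVal dwData As Integer, ByVal dwExtraInfo As Integer)

    ' Standard Windows API constants passed in dwFlags.
    Public Const MOUSEEVENTF_MOVE As Integer = &H1       ' relative move by dx, dy
    Public Const MOUSEEVENTF_LEFTDOWN As Integer = &H2
    Public Const MOUSEEVENTF_LEFTUP As Integer = &H4
    Public Const MOUSEEVENTF_RIGHTDOWN As Integer = &H8
    Public Const MOUSEEVENTF_RIGHTUP As Integer = &H10

    ' Simulates a left click at the present cursor position ("Mouse Click").
    Public Sub MouseClick()
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)
    End Sub

    ' "Mouse Double Click" simply calls MouseClick twice.
    Public Sub MouseDoubleClick()
        MouseClick()
        MouseClick()
    End Sub

    ' "Mouse Right Click" uses the right-button constants instead.
    Public Sub MouseRightClick()
        mouse_event(MOUSEEVENTF_RIGHTDOWN, 0, 0, 0, 0)
        mouse_event(MOUSEEVENTF_RIGHTUP, 0, 0, 0, 0)
    End Sub
End Module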

In addition, a timer component, which executes a specific event at a defined interval, models a smooth transition of the cursor position in the specified direction every 100 milliseconds. Since the time interval is small, the mouse moves across the monitor as smoothly as if it were operated manually. To achieve movement in a given direction, the coordinate parameters (x and y) are passed with different values: passing 1 or -1 for the x and y displacement moves the mouse one pixel at a time, while a larger value (for example 5 or -5) produces faster movement. To stop the mouse from moving, the Stop method of the timer component is invoked.
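A minimal sketch of this timer-driven movement, placed inside the on-screen keyboard form, is shown below. The names moveTimer, stepX, stepY, StartMove, and StopMove are hypothetical; the Interval property of the Windows Forms Timer is specified in milliseconds.

' Sketch of voice-driven cursor movement using a Windows Forms Timer.
Private WithEvents moveTimer As New System.Windows.Forms.Timer()
Private stepX As Integer = 0
Private stepY As Integer = 0

' Called for a command such as "Mouse Move Left" (dx = -1) or
' "Mouse Move Left Faster" (dx = -5).
Private Sub StartMove(ByVal dx As Integer, ByVal dy As Integer)
    stepX = dx
    stepY = dy
    moveTimer.Interval = 100   ' fire every 100 ms for a smooth transition
    moveTimer.Start()
End Sub

' Called for the "Mouse Stop" command.
Private Sub StopMove()
    moveTimer.Stop()
End Sub

' Each tick nudges the cursor by the current step in x and y.
Private Sub moveTimer_Tick(ByVal sender As Object, ByVal e As EventArgs) _
        Handles moveTimer.Tick
    mouse_event(MOUSEEVENTF_MOVE, stepX, stepY, 0, 0)
End Sub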

6 Training

In order to use the software, the user needs to train the computer with his or her voice using the included speech utility, which improves the recognition efficiency of the engine. This is necessary because the engine is not user independent: it will only recognize a person's commands if it has previously been trained on that person's voice by building an individual profile. The recognition profile can be updated for further improvement, and it is trained over several reading sessions based on short stories or fables provided by the utility. In addition, since the system is voice sensitive, background noise should be kept to a minimum when speaking to the system.

7 Results

A convenient tool for analyzing the results of this system is the confusion matrix, which contains information about the actual and the recognized commands. In this matrix, the rows Ci (i = 1, 2, ..., 15) represent the actual commands and the columns Ci* (i = 1, 2, ..., 15) represent the commands recognized by the program; a command given as Ci is recognized by the program as Ci*. A perfect classification therefore results in a matrix with zeros everywhere except on the diagonal, and an off-diagonal cell with a high count signifies that the command of the row is often confused with the command of the column by the voice recognition system. In this study, 15 subjects tested the SR interface, and a confusion matrix tool [12] was used to evaluate the performance of the SR engine. Table 2 shows the confusion matrix for this experiment, and the symbols used for the commands are listed in Table 3. It can be observed that the number of incorrect recognitions is not negligible.

Table 2: Confusion matrix obtained from the experiment (absolute counts); rows are the spoken commands C1-C15 and columns the recognized commands C1*-C15*.

Table 3: Symbolic command representation

C1   Mouse Move Left
C2   Mouse Move Up
C3   Mouse Move Right
C4   Mouse Move Down
C5   Mouse Move Left Faster
C6   Mouse Stop
C7   Mouse Double Click
C8   Mouse Move Right Faster
C9   Mouse Right Click
C10  Mouse Move Up Faster
C11  Mouse Click
C12  Mouse Move Down Faster
C13  Mouse Drag
C14  Mouse Release
C15  Unrecognized Command

The confusion matrix was used to find the pairs of commands that were most often confused. For example, C1's state of confusion with C2 is the number of C1 commands classified as C2, expressed as a percentage of the number of C1 commands classified as either C1 or C2. Table 4 shows the confusion state matrix obtained in this study; the highest confusion is between command C9 (Mouse Right Click) and C13 (Mouse Drag). A confusion matrix was also obtained for every subject. Column 2 of Table 5 shows the correct classification rate per subject, while column 3 lists the highest rate of confusion and the corresponding command pair. Fig. 2 shows the state of confusion of each command with respect to the others. The Correct Classification Rate (CCR) is the proportion of correct recognitions out of the total number of vocalized commands:

    CCR = (# of correct recognitions) / (# of given commands)    (1)
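To make the two measures concrete, the following VB.NET sketch computes the CCR of equation (1) and the pairwise state of confusion from a confusion-matrix array. The array layout (15 x 15 counts, rows = spoken command, columns = recognized command) mirrors Table 2; the function names are hypothetical.

' Sketch: computing CCR and the state of confusion from a confusion matrix.
' counts(i, j) holds how often spoken command i was recognized as command j
' (0-based indices 0..14 standing for C1..C15).
Public Function CorrectClassificationRate(ByVal counts(,) As Integer) As Double
    Dim correct As Integer = 0
    Dim total As Integer = 0
    For i As Integer = 0 To counts.GetLength(0) - 1
        For j As Integer = 0 To counts.GetLength(1) - 1
            total += counts(i, j)                 ' every vocalized command
            If i = j Then correct += counts(i, j) ' diagonal = correct recognitions
        Next
    Next
    ' Equation (1): CCR = correct recognitions / given commands
    Return correct / total
End Function

' State of confusion of command i with command j: the count of Ci recognized
' as Cj, as a percentage of the Ci utterances recognized as either Ci or Cj.
Public Function StateOfConfusion(ByVal counts(,) As Integer, _
                                 ByVal i As Integer, ByVal j As Integer) As Double
    Dim denominator As Integer = counts(i, i) + counts(i, j)
    If denominator = 0 Then Return 0.0
    Return 100.0 * counts(i, j) / denominator
End Function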

Table 4: Confusion state matrix between voice commands and recognized commands (percentages); rows are the spoken commands and columns the recognized commands C1*-C15*.

Table 5: Confusion rate analysis for the individual subjects, listing each subject's correct classification rate (%), highest confusion rate (%), and the command pair involved; averaged over the 15 subjects, the most frequently confused pair was C9 vs. C13.

Fig. 2: State of confusion of the voice commands based on the 15-subject study.

8 Conclusions

This study presented an integrated HCI approach that uses voice-controlled commands to facilitate access to and use of computers by persons with motor disabilities. Effort was devoted to simplifying the user interface for real-time applications with viable performance. The developed application can easily be installed on a home computer at no additional cost and trained for later use without major difficulty. The system evaluation revealed what appears to be an internal problem of the MS Speech Recognition Engine, since one pair of commands was repeatedly misrecognized across most of the subjects during the experiments. A logical continuation of this study is the installation of the system in the homes of motor-disabled persons. The study is also intended to be extended to motor-disabled patients of Miami Children's Hospital and will involve patient follow-up for future system enhancements.

References
[1] Becchetti, C., Speech Recognition Systems: Theory and C++ Implementation, Wiley.
[2] Riverdeep Corp. website, Just Speak Naturally [Online].
[3] Hideki, H., Yoshifumi, N., Yoichi, T., Speech Recognition Interface for General Purpose Workstation, IPSJ SIG Notes, Human Interface.
[4] Holmes, J. N., Holmes, W., Speech Synthesis and Recognition, Taylor & Francis, Inc.
[5] Schroeder, M. R., Computer Speech: Recognition, Compression, Synthesis, Springer Series in Information Sciences, Vol. 35, Springer Verlag.
[6] Out-Loud website, Computing Out-Loud [Online].
[7] Ainsworth, W. A., Pratt, S. R., Feedback Strategies for Error Correction in Speech Recognition Systems, International Journal of Man-Machine Studies, 36(6).
[8] ScanSoft Inc. website, QualiLife Selects ScanSoft Speech Technology to Help People with Disabilities Achieve Greater Independence [Online].
[9] Beamer, G., Upgrading Microsoft Visual Basic 6.0 to Microsoft Visual Basic .NET, Microsoft Press.
[10] Howard, T., The Smalltalk Developer's Guide to VisualWorks, SIGS Books.
[11] Microsoft Corp. website, Speech SDK 5.1 for Windows Applications [Online].
[12] Adjouadi, M., Ayala, M., NeuralStudio (a neural networks simulator) [Online].

Acknowledgments
The authors gratefully acknowledge the support from the National Science Foundation Grants EIA and HRD, and the Office of Naval Research Grant N.
