Development and Evaluation of an Abstract User Interface for Performing Maintenance Scenarios with Wearable Computers


Development and Evaluation of an Abstract User Interface for Performing Maintenance Scenarios with Wearable Computers

A Thesis submitted to Faculty 3 of the University of Bremen in Fulfillment of the Requirements of the Degree of Diplom-Informatiker (Dipl.-Inf.)

Claas Ahlrichs
Department of Computer Science, University of Bremen

First Reviewer: Prof. Dr. Michael Lawo
Second Reviewer: Prof. Dr. Rainer Koschke
Advisor: Dipl.-Inf. Hendrik Iben

February 17, 2011


Contents

1 Introduction
   1.1 Motivation
   1.2 Research Question
       1.2.1 Scope and Limitation of the Thesis
       1.2.2 Requirements
   1.3 Thesis Organization

2 A Brief Review of Wearable Computing and User Interfaces
   2.1 The Wearable Computer
       2.1.1 Historical Views
       2.1.2 Properties and Constraints
       2.1.3 A Metaphor for Wearables
   2.2 From Mobile to Wearable Computing
       2.2.1 Mobile Computing
       2.2.2 Ubiquitous and Pervasive Computing
       2.2.3 Wearable Computing
   2.3 Towards an Abstract User Interface
       2.3.1 Graphical, Auditory and Tactile User Interfaces
       2.3.2 Wearable User Interfaces
       2.3.3 Abstract User Interfaces
   2.4 Summary

3 Related Work Regarding Abstract User Interfaces
   3.1 WUI Toolkit
   3.2 Huddle
   3.3 SUPPLE
   3.4 Summary

4 An Abstract User Interface for Wearable Computers
   4.1 Development
       4.1.1 Design Problems
       4.1.2 Structure and Design of Abstract User Interface Components
       4.1.3 Dispatching an Event
       4.1.4 Integration of Context
       4.1.5 Rendering of Abstract User Interface Components
       4.1.6 Summary (of Used Design Patterns)
   4.2 Extensibility
       4.2.1 Using Existing Components
       4.2.2 Modifying the Rendering Process
       4.2.3 Addition of New User Interface Components
       4.2.4 Summary
   4.3 Rendition
       4.3.1 Common User Interface Elements
       4.3.2 General Things to Consider
       4.3.3 Developing a Prototypical Renderer
   4.4 Evaluation
       4.4.1 Description of User Study
       4.4.2 Results
       4.4.3 Discussion
   4.5 Discussion
       4.5.1 Comparison with Related Work
       4.5.2 Fulfillment of Requirements
       4.5.3 Limitations of AbstractUI
       4.5.4 Personal Opinion
   4.6 Concise Summary

5 Conclusions
   5.1 Conclusions
   5.2 Contributions
   5.3 Future Work

A Source Code

B Evaluation

List of Figures

1.1 Illustrates a wearable system that an aircraft technician could use during maintenance activities. A vest is shown with a miniaturized computer inside. When closed (left), it can be used in conjunction with a head-mounted display (HMD) and data glove [69]. When open (right), a personal digital assistant (PDA) with pen input is used for interaction. (1) OQO computer. (2) Bluetooth keyboard. (3) HMD pocket. (4) HMD controller. This image has been taken from [44].

1.2 Shows the SCIPIO ("Semi-Complex Intelligent Programmable Input and Output") [69] platform, a data glove for wearable computing applications. It is "a miniaturized interaction device with a [...] number of flexibly combinable input and output channels" [69]. This image has been taken from [2].

4.1 Illustrates an abstract representation of a user interface (UI) for a quiz application. It shows how UI elements are structured and introduces different kinds of abstract UI elements. A concrete representation of this composition is shown in Figure 4.2.

4.2 Shows a concrete representation of the abstract UI description that is illustrated in Figure 4.1. A few exemplary representations of abstract UI elements are presented. The background colors are used to highlight the dimensions of the Container components.

4.3 Illustrates the structure of the Composite design pattern as defined by [23, p. 163]. The generic Component interface is shown as well as how Leaf and Composite elements are accordingly implemented.

4.4 Shows the implementation of the Composite design pattern as part of the structure of an abstract user interface (AUI). It supports treating compositions (e.g. Container) as well as individual components (e.g. Choice, Text and Trigger) in a homogeneous way.

4.5 Defines the structure of the Iterator design pattern [23, p. 257]. The Aggregate and Iterator interfaces are shown as well as specializations of both.

4.6 Illustrates how the Iterator design pattern has been implemented. It shows the adapted Iterator and Component (former Aggregate) interfaces. Concrete implementations of both are presented as well.

4.7 The design of an abstract UI component's data model is shown. The Component and Model interfaces are declared as well as specialized versions of them.

4.8 Illustrates the implementation of the abstract UI components' data models in the proposed framework. Several primitive Component classes (Text, Trigger and Choice) and their corresponding data models (TextModel, TriggerModel and ChoiceModel) are shown.

4.9 Shows the structure of the Abstract Factory design pattern as defined by [23, p. 87]. Two families of products (AbstractProduct1 and AbstractProduct2) are presented along with the AbstractFactory class. The concrete implementations of both are shown as well. Furthermore, the products with their concrete implementations and how they relate are shown.

4.10 Illustrates how the Abstract Factory design pattern can be used to facilitate the creation of different families of data models. Shown are the ModelFactory interface and an exemplary family of data model classes (DefaultModelFactory, DefaultTextModel, etc.).

4.11 The structure of the Singleton design pattern is illustrated. It shows the Singleton interface.

4.12 Shows the implementation of the Singleton design pattern for providing a unique instance of a ModelFactory object.

4.13 Shows the basic design of the event system in the proposed framework. It illustrates the Event class, the EventListener and EventProvider interfaces as well as their corresponding concrete versions. Furthermore, it is shown how they relate to each other.

4.14 Illustrates how the event system is implemented. It shows the ActionEvent and PropertyChangeEvent classes as well as their according EventListener (ActionListener and PropertyChangeListener) and EventProvider (ActionProvider and PropertyChangeProvider) interfaces. The relationship among them is presented along with event-specific operations.

4.15 Illustrates the structure of the Observer design pattern as defined by [23, p. 293]. The interfaces for a Subject and an Observer are shown as well as their corresponding ConcreteSubject and ConcreteObserver classes. The relationship between them may be taken from this figure as well as the relationship among their concrete implementations.

4.16 Illustrates how the Observer design pattern is integrated in the solution implementation. It shows the relationship among the fundamental classes from Figure 4.13 when looking from the viewpoint of event distribution. The EventProvider and EventListener interfaces as well as their specialized interfaces are presented.

4.17 Shows the implementation of the Abstract Factory and Singleton design patterns as a means of supporting different families of EventProvider classes. Illustrated are the ProviderFactory class and an exemplary family of EventProvider classes (DefaultProviderFactory, DefaultPropertyChangeProvider, etc.).

4.18 Shows the implementation of a Context object that can be used to manage contextual information (getProperty and setProperty). It can be used to store and load the information from an XML-based source (loadFromXML and storeToXML).

4.19 Illustrates how the Singleton design pattern is used to provide global access to a single Context object and how abstract components manage their Context objects.

4.20 The implementation of the Context object as a concrete EventProvider is shown. The PropertyChangeProvider interface is implemented and notifies all PropertyChangeListener objects when required.

4.21 Shows a chain of renderers, each being responsible for rendering certain abstract components. E.g. the CompositeRenderer class renders compositions whereas the PrimitiveRenderer1 class renders all Text and Choice components. PrimitiveRenderer2 takes care of Trigger components.

4.22 The structure of the Chain of Responsibility design pattern is presented as defined by [23, p. 223]. A Handler interface is defined and ConcreteHandler classes are shown as well.

4.23 The implementation of the Chain of Responsibility and Singleton design patterns is illustrated as a means of rendering abstract components to concrete ones. The Renderer class and a few concrete versions of it (AWTCompositeRenderer, AWTComponentRenderer, DummyRenderer, etc.) are presented.

4.24 Illustrates the structure of the Mediator design pattern as defined by [23, p. 273]. Shown are the Mediator and Colleague classes as well as concrete implementations of them.

4.25 Shows the communication among a mediator and mediated objects. An AbstractComponent requests to update its ConcreteComponent classes through the ConcreteMediator object and vice versa. The ConcreteMediator object delegates the requests to the appropriate recipients.

4.26 Illustrates the average NASA TLX rating of all subjects (0 meaning very low to 100 meaning very high).

4.27 Shows the average NASA TLX rating of each subject.

4.28 Illustrates the average usability rating of all subjects (1 meaning very poor to 10 meaning very good).

4.29 Shows the average usability rating of each subject.


List of Tables

2.1 A possible dialogue between a wearable personal assistant (PA) and its wearer as stated by Clark et al. in [15]. Information entered by the wearer is highlighted.

3.1 Summarizes pros and cons of the WUI toolkit. A list of arguments regarding the intentions, goals and usability is shown.

3.2 Summarizes pros and cons of the Huddle framework. The approach followed by this system possesses various advantages and disadvantages, which are listed above.

3.3 Summarizes pros and cons of the SUPPLE framework. Diverse (dis-)advantages regarding the usability, goals and intentions are listed.

4.1 The representation of contextual information within the proposed framework is presented. Keys and their corresponding values are shown.

4.2 Summarizes pros and cons of using existing abstract components in order to create high-level components.

4.3 Lists dis-/advantages of modifying the rendering process in order to adjust the representation of particular abstract components.

4.4 Shows up- and downsides of adding a completely new abstract component to the framework.


Listings

A.1 Demonstrates an example of how to create a high-level component. The development process is explained in Subsection 4.2.1.

A.2 Shows an example of how to create a custom Renderer and how it is inserted into the actual rendering process. Subsection 4.2.2 explains the development process of this class.

A.3 Lists an implementation of a data model for a weather component which fetches information for a weather station from the internet. Subsection 4.2.3 describes the development process of this class. Furthermore, Listings A.4 and A.5 are related to this listing.

A.4 Demonstrates an example of how to implement a new abstract component. Subsection 4.2.3 describes the development process of this class. Furthermore, Listings A.3 and A.5 are related to this listing.

A.5 Shows how a renderer for a new abstract component can be implemented. Subsection 4.2.3 describes the development process of this class. Furthermore, Listings A.3 and A.4 are related to this listing.

B.1 Lists an example of how to use Text components of the AbstractUI framework. A corresponding implementation in the WUI toolkit is shown in Listing B.2.

B.2 Shows how to create text information in the WUI toolkit. This is a semantically equivalent implementation to Listing B.1.

B.3 Shows an example of how to use Trigger components of the AbstractUI framework. A corresponding implementation in the WUI toolkit is shown in Listing B.4.

B.4 Demonstrates the use of the ExplicitTrigger component in the WUI toolkit. Listing B.3 contains a semantically equivalent implementation.

B.5 Lists an example of how to use Choice components of the AbstractUI framework. A corresponding implementation in the WUI toolkit is shown in Listing B.6.

B.6 Demonstrates the use of the SelectionList component of the WUI toolkit. Listing B.5 shows the same example in the AbstractUI framework.

B.7 Shows the use of Container components in the AbstractUI framework. A corresponding implementation in the WUI toolkit is shown in Listing B.8.

B.8 Demonstrates the process of using Group components in the WUI toolkit. This is a semantically equivalent implementation to the contents of Listing B.7.


Chapter 1
Introduction

This chapter presents a short introduction to and motivation of this work. The research question is defined and its scope and limitations are highlighted. At the end of the chapter, the structure of the remainder of the thesis is outlined.

1.1 Motivation

Desktop computers are no longer the only affordable technology with reasonable computing power. Over the last decades a new trend in computing has emerged: mobile computing. In recent years, mobile computing platforms have become available to the broad masses. Their price has decreased and today almost everybody can profit from mobile devices, like smartphones or personal digital assistants (PDAs). Nowadays many people use mobile phones, MP3 players or digital cameras in their daily lives. Devices like mobile phones are frequently used by consumers, but professionals in the business field have recognized their benefits as well. The use of mobile technology has increased not just connectivity but also productivity. Management of contacts or appointments and the use of web services are just a few examples. Modern mobile devices, such as a Blackberry or iPhone, provide the possibility to install applications (e.g. [3, 5, 7]). Users can choose from thousands of available applications and in some cases get the opportunity to create their own applications by using appropriate development environments (e.g. [4, 8]).

A central part of every application is its user interface (UI), typically represented graphically. An appropriate level of usability and information presentation must be ensured in order to control an application. However, using mobile applications may be a burden, as their UIs are still frequently based on desktop applications. Instead of creating an application that is specialized to solve a (single) problem, desktop applications are usually rather general-purpose and feature-rich. While this may be a useful approach for desktop computers, it is not for mobile devices, as they differ significantly from stationary systems (e.g. in terms of size, usage, computing power, etc.). Unnecessary features tend to hinder usability rather than being a useful extension. The context of use can vary drastically (e.g. changing light conditions, noise levels, while walking, being in a meeting or on a train), and a user expects to interact with the application in all situations. These problems also apply to other devices, such as an emerging special kind of mobile device: the wearable computer.

A wearable system can be worn, e.g. directly on the (human) body or as an integral part of clothing. For the sake of a simple introduction, the wearable computer may be seen as a personal assistant. Just like a good (human) personal assistant, a wearable system is unobtrusive, predicts and prepares needed information and schedules appointments. In other words, it is an assistant that stays in the background, handles time-consuming tasks (e.g. management, organization, etc.) and makes a less distracted working environment possible. The concept and metaphor are further explained in the following chapter (see section 2.1).

In contrast to desktop and mobile applications, the development of wearable user interfaces (WUIs) turns out to be even more challenging [58]. When using a wearable computer, one quickly comes to the conclusion that regular desktop software becomes hardly operable. Imagine manipulating a spreadsheet while walking in the park or composing an e-mail while crossing a busy intersection. In a mobile setting most desktop software is simply no longer usable. The reason is that it has been designed with the underlying assumption that the user's attention is devoted to the interface. While this approach has worked for stationary systems, it does not enable mobile usage, where the primary task typically lies in the real world. Mobile users experience real-world influences and may easily be distracted by them in such a way that the use of standard (desktop) software becomes difficult. E.g. when in movement, the hand-eye coordination required to operate a mouse degrades. Consequently a wearable system should minimize the cognitive load and attention needed to control it. Another point is that stationary systems often rely on vision in order to be used. One can imagine that a graphical user interface (GUI) of a wearable system is even more limited than it is on mobile devices. In fact, a graphical representation of information is sometimes not required or helpful, e.g. when the visual attention is focused on a real-world task like crossing the street or talking to a colleague. A wearable system might utilize multiple modalities to relay information; in the previous example, auditory or tactile (touch) feedback would have been more adequate. In general, the same UI problems apply to wearable systems as they do to mobile devices. The difference is that WUIs are usually even more limited than mobile UIs (regarding the amount of information that can be displayed at once).

Sensors are an integral part of wearable systems and can be used to recognize a wide range of contexts. Context information (e.g. user activities, user intentions, environmental conditions, etc.) can be utilized in applications to optimize and ensure usability. Context-awareness provides an enrichment and an opportunity to create more reactive and intuitive applications. E.g. crossing the street or talking to a colleague could be recognized and the UI automatically switched to an auditory or tactile one.

An exemplary scenario, requiring minimal cognitive load and utilizing sensors, could be the maintenance of large passenger aircraft, where economic maintenance procedures are imperative [43]. The key figure in the economically efficient operation of an aircraft is the ratio of actual in-flight time vs. ground time used for refueling, restocking, passenger boarding and maintenance [44]. Imagine an aircraft maintenance technician. Before beginning his work, he must identify the required maintenance task. As he carries out his work, he might need access to task-related documents, like maintenance reports from other technicians or further documentation resources. Having finished the maintenance task, the technician has to file a maintenance report. Equipping the technician with a wearable computer that is integrated in his work clothing has several benefits (e.g. it rids him of paper documentation, gives fast access to task-related documentation, etc.). During the maintenance task, sensors collect information on the technician's activity and environment. Such contextual information is used to support him in a way that automatically adapts the UI (e.g. when light conditions change) or offers useful documentation (e.g. related maintenance reports or manufacturer's documentation).
A wearable system for this scenario might be integrated in a vest. It could use a head-mounted display (HMD) or PDA to present information (e.g. as a replacement for a monitor) and a data glove [69] as input source (e.g. as a replacement for mouse and keyboard). Figures 1.1 and 1.2 illustrate how such a wearable system could look. In general, wearable computing systems typically do not use standard hardware like keyboard, mouse or display. Instead, specialized I/O devices (e.g. HMD, Twiddler, etc.) are used. The heterogeneous nature of the hardware in use and the absence of common development environments make it difficult to create applications and reusable UIs. Those are two reasons why wearable applications are often implemented from scratch and potentially include programming errors that could have been avoided through the use of a standardized development framework.

Figure 1.1: Illustrates a wearable system that an aircraft technician could use during maintenance activities. A vest is shown with a miniaturized computer inside. When closed (left), it can be used in conjunction with a HMD and data glove [69]. When open (right), a PDA with pen input is used for interaction. (1) OQO computer. (2) Bluetooth keyboard. (3) HMD pocket. (4) HMD controller. This image has been taken from [44].

Figure 1.2: Shows the SCIPIO ("Semi-Complex Intelligent Programmable Input and Output") [69] platform, a data glove for wearable computing applications. It is "a miniaturized interaction device with a [...] number of flexibly combinable input and output channels" [69]. This image has been taken from [2].

Using such a framework also reduces development efforts. The design of such a development framework is the main content of this work. The goal of this thesis is the development and evaluation of a framework for abstract user interfaces (AUIs), allowing application developers to implement wearable applications in an abstract manner and use them on multiple devices. The proposed outcome is a middleware that reduces development efforts through reusability and supports the creation of context-aware UIs as well as device-independent applications. Application developers can concentrate on the actual development of an application and do not have to worry about how their UI is displayed.

1.2 Research Question

This section highlights the research question of the thesis. Scope, limitations and requirements of a proposed solution are formulated. Application developers encounter a number of real-world problems during the actual development process of a wearable application. A wearable system typically utilizes non-standard I/O hardware and provides context information (such as environmental conditions or user activities), but the absence of a unified framework for creating WUIs not only increases development efforts but possibly increases error rates as well. Ideally an (average) application developer should not need expert knowledge on how to create WUIs in order to create a wearable application. However, this is currently not the case and wearable applications are "[...] still implemented from scratch" [71]. Three research questions, pursued in the thesis, are stated as follows:

1. How to appropriately design an AUI for wearable computing applications? How to support device independence and dynamic interface changes? As stated before, wearable computing systems typically utilize specialized I/O devices, unlike desktop systems where keyboard, mouse and display are expected. Instead of depending on special I/O devices and therefore making reuse of implementations difficult, a UI should be described in a device-independent way: an AUI. An inappropriate design is likely to cause usability issues or other problems at some point in time. This is even more true when considering the heterogeneous nature of current wearable computing systems.

2. What is necessary to build a context-aware application? Research has shown that a wide range of contexts can be recognized using sensors [14, 25, 32, 38, 64]. Available context information, such as environmental conditions or user activities, is detectable and may be used to optimize the UI of an application. Context-awareness means that a sensor provides some kind of context which in turn can be used by an application. E.g. the illumination can be measured and used to change the color contrast accordingly; a minimal sketch of this follows below.

3. What interface components are required for creating a broad range of maintenance applications? Maintenance work usually involves documentation to be read, protocols to be filled in as well as measurement and adjustment of real-world objects. E.g. imaginable interface components are text, selection and container components.

These items will be discussed and answered in Chapter 4. The first question is the most important research question of this thesis, in the following referred to as the research question. Consequently less effort will be put into answering the other questions.
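To make the second question more concrete, the following is a minimal sketch in Java, the implementation language used later in this thesis. Every name in it (ContextListener, Ui, the "illumination" key) is an illustrative assumption rather than the proposed framework's actual API: a light sensor reports the ambient illumination and a listener raises the UI contrast in bright surroundings.

    // Hypothetical sketch; ContextListener and Ui are illustrative names,
    // not part of the framework developed in Chapter 4.
    public class ContrastExample {

        /** Receives context updates, e.g. the ambient illumination in lux. */
        interface ContextListener {
            void contextChanged(String key, double value);
        }

        /** Minimal UI stand-in whose contrast can be adjusted. */
        static class Ui {
            void setContrast(double contrast) {
                System.out.println("contrast set to " + contrast);
            }
        }

        public static void main(String[] args) {
            final Ui ui = new Ui();
            ContextListener listener = new ContextListener() {
                public void contextChanged(String key, double lux) {
                    if ("illumination".equals(key)) {
                        // brighter surroundings call for stronger contrast
                        ui.setContrast(lux > 10000 ? 1.0 : 0.6);
                    }
                }
            };
            listener.contextChanged("illumination", 25000); // simulated sensor reading
        }
    }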
1.2.1 Scope and Limitation of the Thesis

This thesis proposes the design and implementation of an AUI for creating WUIs with reusable components. The framework includes a device-independent description of UIs where the UI components are based on a model-driven approach. Context-aware UIs are supported by utilizing available context information (from sensors, the user, the wearable system, etc.) and support automatic adaption. The design of the framework is not limited to a specific programming language, nor does it suggest one in particular. However, the implementation is written in Java and therefore can only be used on Java-enabled devices. Furthermore, the presence of at least Java 1.6 is expected. Future releases will target additional programming languages and may include support for the .NET programming environment or the iPhone/iPad, thus extending the range of supported devices.

Although WUIs are not limited to a graphical representation, the primary focus of the proposed solution will be on graphical output. The existence of different output modalities, such as auditory or tactile output, is known and kept in mind during the development. The optimal rendition of UI components is not evaluated during the thesis and the visualization of individual components is not examined. Instead, all components are rendered in a way the author feels is most appropriate and intuitive. The focus is rather on the development of an AUI framework that supports easy extension and integration of rendering mechanisms. The author does not claim to create a rendering system that displays an AUI in the best or optimal way. Instead, a foundation is created that is flexible enough to integrate further rendering mechanisms.

1.2.2 Requirements

Six requirements are extracted from the research questions and described in the following. The proposed solution (see Chapter 4) will be viewed in the light of those requirements.

Device-independent UI description: The UI is to be described in a way that allows rendering on a wide range of devices and usage of UI toolkits or frameworks. It should be specified on an abstract level that is independent of a particular rendering software (e.g. abstract window toolkit (AWT), Swing, standard widget toolkit (SWT), etc.) or device (e.g. PDA, HMD, etc.).

Reusability of components: Reusability of UI components allows a more productive and effective development of wearable applications. Instead of creating a wearable application from scratch with a specialized interface, a list of default UI components (that are likely to be reused) should be identified and provided for re-usage. The identified components are not expected to be exhaustive, meaning that they will cover a wide range of wearable applications but not every possible UI.

Support for integration of context: Gathered contextual information (by sensors, the user, the wearable) can help to optimize the rendering of a UI. Its usage should not be restricted to internal software components but also permitted directly within wearable applications. A global storage place for context information would allow the propagation of newly gained or changed contextual information (a sketch of such a store follows at the end of this subsection).

Extensibility: New ways of displaying information emerge from time to time and create the necessity to adapt the rendering process. Existing UI components may be required to be rendered in a different way or new UI components may need to be integrated into the toolkit. The identified list of UI components is not exhaustive and therefore the addition of new UI components must be permitted. As the optimal rendering of information is not part of the thesis, the option to modify and create renderers should be provided.
This includes the creation of renderers for new devices that have reached the market.

Support for distribution of toolkit components: Wearable systems are typically very limited in terms of available computing power and energy consumption. Therefore the possibility to distribute a wearable application across multiple systems in a network should be considered. A wearable system could act as a display of a wearable application and communicate with a second system (with more computing power) on which the actual wearable application is executed.

Support for multi-modal information presentation: WUIs are not restricted to a graphical representation. In fact there are cases in which a graphical representation is not the preferred way to display information, e.g. when walking, crossing the street or in general when a user's visual attention is occupied by a real-world task. Some information can equally well be displayed using tactile or auditory interfaces. The UI components should be designed in a way that allows the possibility to partially render them in a non-visual modality.

These requirements merely present an overview. Their satisfiability relies on the fulfillment of diverse problems and constraints connected to them. All requirements will be picked up and referred to in Chapter 4 during the development section (see section 4.1).
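To illustrate the context requirement named above, the following sketch shows one possible shape of such a global storage place. Only the method names getProperty and setProperty are taken from the Context object that appears later in this thesis; the Singleton access, the listener interface and the synchronization are assumptions made for the sake of a self-contained example.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Sketch of a global key-value store for contextual information that
    // notifies listeners about changes; not the framework's actual code.
    public final class Context {

        public interface PropertyChangeListener {
            void propertyChanged(String key, String newValue);
        }

        private static final Context INSTANCE = new Context();
        private final Map<String, String> properties = new HashMap<String, String>();
        private final CopyOnWriteArrayList<PropertyChangeListener> listeners =
                new CopyOnWriteArrayList<PropertyChangeListener>();

        private Context() { }

        public static Context getInstance() { return INSTANCE; }

        public synchronized String getProperty(String key) {
            return properties.get(key);
        }

        // Stores a value and propagates the change to all registered listeners.
        public void setProperty(String key, String value) {
            synchronized (this) { properties.put(key, value); }
            for (PropertyChangeListener l : listeners) {
                l.propertyChanged(key, value);
            }
        }

        public void addListener(PropertyChangeListener listener) {
            listeners.add(listener);
        }
    }

A sensor would call Context.getInstance().setProperty("illumination", "25000"); any UI component that registered a listener is then informed of the change, which is exactly the propagation of newly gained contextual information demanded above.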
1.3 Thesis Organization

The remainder of the thesis is structured as described in the following.

Chapter 2 "A Brief Review of Wearable Computing and User Interfaces": Presents background information on different kinds of UIs. A number of computing paradigms, such as ubiquitous or wearable computing, are described. This chapter is used to establish a common understanding of the terminology used throughout the thesis.

Chapter 3 "Related Work Regarding Abstract User Interfaces": Various wearable computing frameworks and toolkits are introduced which represent the related work to WUIs. They are used to highlight different approaches to generating WUIs.

Chapter 4 "An Abstract User Interface for Wearable Computers": Solves the research question of the previous section. The development process and evaluation of an AUI for wearable computing applications are shown. This chapter proposes a solution with respect to the requirements and related work presented in the previous chapters.

Chapter 5 "Conclusions": Includes a summary of contributions made to the research field and a list of conclusions. Possible directions of future work are presented as well. This chapter closes the thesis.
Chapter 2
A Brief Review of Wearable Computing and User Interfaces

The goal of this chapter is to establish a common understanding regarding the terminology used in the thesis. Therefore terms like wearable computing and user interface (UI) will be described in more detail. At first a definition of the term wearable computer and an introduction to various computing paradigms are given. The wearable computing paradigm is highlighted. A definition of abstract user interfaces (AUIs) is presented. The chapter is closed with a summary of the information that is directly linked to the research question of the thesis.

2.1 The Wearable Computer

The term wearable computer is somewhat pictorial, but what does it mean? The field of wearable computing can be considered young when compared to other academic disciplines. Therefore several viewpoints from wearable computing pioneers exist and are highlighted in the course of this section. Then a list of properties of a wearable computer and a desktop-equivalent metaphor follow. The terms wearable computer and wearable are synonyms and are used as such throughout this work.

2.1.1 Historical Views

Bass et al. [6] comment that the wearable computer is often viewed as a small version of a desktop computer, meaning it uses standard operating systems, software and standard I/O devices like a mouse or keyboard. This is a rather narrow and constraining view, as it does not take into account the new design opportunities and the openings to new application areas that this discipline offers. Instead, the authors of [6] suggest that a wearable computer should be seen as a new type of device. In fact a wearable computer may be anything from "small embedded devices with very limited capabilities" [6] to computers with standard I/O capabilities that are worn "[...] on the body" [6]. Therefore a rigid definition based on physical properties of wearable computers cannot be given; they should rather be described by their attributes and constraints [6].

In 1995 one of the first notions of a wearable computer was introduced by Thad Starner. He identified in [56] persistence and consistency as the two main characteristics of wearable computer interfaces. Persistence means that a wearable computer interface is always available and used simultaneously with a primary real-world task. Consistency is being able to use the same wearable computer interface in all situations. Starner is one of the few people who almost constantly wear a wearable computer. Consequently his definition is more inclined towards a full integration of wearable computers into everyday life.

Two years later Rhodes [50] took up Starner's approach. He identified and described the following five attributes of a wearable computer system:

Portable while operational: A wearable computer system is usable while being on the move. This is the feature that most distinguishes a wearable from both desktop and laptop computers.

Hands-free use: Interaction should not require, or should at least limit, the use of hands (e.g. through speech input), as they are often otherwise occupied during real-world tasks.

Sensors: A wearable computer system uses sensors as input sources in addition to user inputs.

Proactive: Information can be conveyed to a user through the use of various attention-grabbing methods.

Always on, always running: It is always working, sensing and acting by default.

Steve Mann is another pioneer in wearable computing. In [33] Mann highlighted three criteria: the eudaemonic, existential and ephemeral criterion. In 1997 he used those criteria to define WearComp, his wearable computer. The eudaemonic criterion (named after the Eudaemons, "a group of physicists who pioneered unobtrusive wearable computers" [33, p. 66]) refers to the fact that the wearable computer needs to be a part of its mobile user. Consequently the computational apparatus may not be connected to anything that would restrict the way of using it. The existential criterion characterizes the fact that a user must be able to control the computational capabilities of the wearable computer at hand. The cognitive load and attention required for controlling it need to be at a minimum. The ephemeral criterion describes the nonexistence of operational or interactional delays. The wearable apparatus is "constant in both operation and its potential for interaction with the user" [33, p. 66]. Constancy in operation implies that the apparatus is constantly active while being worn. This does not mean that the device may not have power-saving modes, but it is expected to wake itself up when necessary. Constancy in interaction means that all output channels are always active, not only when the user is interacting with the apparatus. E.g. a head-mounted display (HMD) would display a UI all the time. The constant availability reduces the mental effort required for switching between interaction and non-interaction mode to almost zero [33].

Over the years Mann and Starner refined their definitions of wearable computing and wearable computers. In 2001 Mann [35] introduced the theory of humanistic intelligence. Humanistic intelligence is referred to as the intelligence "that arises when a human is part of the feedback loop of a computational process in which the human and computer are inextricably intertwined" [35, p. 10]. Starner refined the properties of a wearable computer in several publications [56-62]. Even though the basic properties remain the same, Starner added more details to the attributes as he gained more experience with his wearable apparatus. In his opinion a wearable computer should "persist and provide constant access" [68, p. 13]. Consequently the system must be unobtrusive and mobile. However, the system has the ability to draw the user's attention to itself if necessary. The usage of a wearable computer is intended to become a secondary task that supports a physical task in the real world (the primary task) with minimal effort by the user. In order to provide useful cues, the apparatus collects contextual information (e.g. gathered by sensors) on "the user's environment, the user's physical and mental state and the wearable's own state" [57, p. 23]. This information is utilized to adapt input and output modalities automatically "to those which are most appropriate and socially graceful at the time" [57, p. 23].
In other words: the wearable does not use auditory output when its user is in the midst of a meeting or a presentation; instead other output modalities are applied (e.g. tactile feedback).

Summarizing the previous views on wearable computing and computers from Thad Starner and Steve Mann: Starner's view on wearable computing is focused on supporting activities in daily life, whereas Mann's comprehension of wearables tends towards a total integration into our lives. As the preceding paragraphs have shown, a short definition cannot easily be given. Instead, fundamental properties can be described that need to be fulfilled in order for a system to be considered a wearable computer system. Such attributes will be presented hereafter.

2.1.2 Properties and Constraints

Witt [68] summarizes the viewpoints from the previous section and describes five attributes of a wearable computer system. In addition to the wearable hardware and the software running on it, a wearable system typically has the following properties and constraints:

Limited Capabilities: Compared to the capabilities of a stationary computer system, a wearable computer system is often very limited. Significant differences in terms of computing power, energy consumption, and available I/O modalities are typical [57].

Operation Constancy: A wearable computer system provides useful information at all times. It is always on, runs in the background, and supports a primary task in the real world [33, 50, 57].

Seamless Environment Integration: Unobtrusive and non-distracting behavior during primary physical tasks must be ensured by a wearable computer system. Therefore the context of use should be reflected in the UI [60].

Context-Awareness: Support and interaction during a primary task may be optimized through the recognition of context (e.g. environmental or user context) [28, 50, 57].

Adapted Interaction: A wearable computer system may automatically adapt a running application and/or interaction style, thus making interaction easier and more efficient while reducing mental effort [28, 53, 54, 57, 60].

These outlined items provide a basic definition of what a wearable is about. Wherever terms like wearable or wearable computer are used throughout the succeeding chapters, they are meant to comply with the previously presented attributes.

2.1.3 A Metaphor for Wearables

Clark et al. state that "the desktop metaphor is dead" [15]. The desktop metaphor has been around for a few decades and is usually associated with stationary or desktop computers. It is often spoken of where WIMP ("Windows, Icons, Menus and Pointers") interfaces are thought of. The desktop metaphor utilizes common knowledge of desk work: sheets of paper are symbolized as windows, folders as disk directories, etc. This symbolism made computers accessible to novice and casual users, but restricted expert usage. Expert users tend to prefer typed commands and make more frequent use of short-cuts; they feel limited by such kinds of UIs [15]. This metaphor turned out to be inappropriate not just for expert users, but also for wearable computers [51]. Users of wearables encounter practical problems. E.g. interaction devices such as a mouse or keyboard become unmanageable while walking and cause frustration as the interaction speed drops remarkably; HMDs tend to have too low contrast to allow practical usage in augmented (see-through) mode; etc. Another major drawback is the assumption that graphical user interfaces (GUIs) are meant to be guided by their users, meaning that the users actively control the UI. However, the trend in wearable computing goes towards proactive UIs, where the wearable unobtrusively offers useful information to its user, or self-adapting UIs [16, 17, 28, 55].

The authors of [15] claim to have found a more appropriate metaphor for wearable user interfaces (WUIs): the personal assistant (PA). It is a metaphor for a good (human) PA who makes a less disturbed working environment possible.
He takes care of time-consuming or annoying tasks like the scheduling of meetings or appointments, trivial paper work, etc. He stays in the background, prepares possibly helpful information and offers it in an unobtrusive way when necessary. Clark et al. [15] argue that in the context of wearable computing a human-human-like interaction might be more desirable than human-computer interaction. They envision a wearable system that automatically adapts to the wearer's mood. In order to achieve human-like characteristics, such a system would have to monitor its user's physical and mental state as well as itself and the user's environment, and is thus likely to involve aspects of affective computing [48]. A possible dialogue between a wearable PA and its wearer is shown in Table 2.1. Such a dialogue is likely to work in various modalities, e.g. in audible or visual form. Obviously a wearable computer does require a network connection, access to the wearer's telephone and knowledge about the wearer's position before such dialogues are possible. Not quite as obvious is its ability to monitor its wearer's habits, learn them and schedule things accordingly to accommodate them. Summarizing, a better metaphor for wearable computers is a (wearable) PA: an assistant that accompanies its wearer all day and presents helpful information at the right time.

  Wearer: PA wakeup.
  PA: While you were teaching, there were two telephone calls to your office and six incoming e-mails. None of the e-mails were marked urgent but one of them is from the head of department. Would you like to process it now?
  Wearer: No. Did the telephone callers leave messages?
  PA: Yes. I can play them to you; but since that lecture was to first-year students, why don't you have a cup of coffee first, like you usually do after seeing them?
  Wearer: OK. Has Neill arrived at the University yet?
  PA: Not yet; he's half-way between his home and the University.

Table 2.1: A possible dialogue between a wearable personal assistant (PA) and its wearer as stated by Clark et al. in [15]. Information entered by the wearer is labeled "Wearer".

2.2 From Mobile to Wearable Computing

A number of different computing paradigms are introduced. First mobile, ubiquitous and pervasive computing are explained. Wearable computing is explained in the second part of this section, which closes with a description of how the term will be used throughout the thesis.

2.2.1 Mobile Computing

The constant decrease in size and cost of mobile platforms, like mobile phones or personal digital assistants (PDAs), has inspired the creation of a new computing paradigm: mobile computing. The previous decades have introduced internet-enabled devices in a small form factor to the masses and rang in the new trend of mobile computing. In recent years touch-sensitive devices, like the iPhone or Blackberry, caused further popularity and lifted the importance of mobile computing to new heights. Today many people possess such a mobile device and use it in everyday life. The constant availability of information and social acceptance increased the necessity of usable UIs, which represent a central part of every application. On mobile devices, UIs are typically visual but also make use of auditory and tactile (touch) interfaces. Just like any other UI, UIs on mobile platforms must ensure an appropriate level of usability and information presentation. However, UIs ported to a mobile device from desktop applications are likely to hinder usability, as UIs of stationary systems are created with underlying assumptions that do not apply in mobile settings, e.g. that the device and application have their user's (full) attention because operating them is the user's primary task.
In most cases mobile users cannot devote much of their attention to their mobile device, because they are easily distracted by real-world influences. E.g. they might risk walking into a road sign or another person when walking down the street. Additionally, desktop applications usually tend to be stuffed with features and rather general-purpose, which in turn might hinder usability on mobile platforms. Mobile devices do not have the necessary screen size [10] and computing power to enable the operation of desktop applications; instead, applications should be specialized to solve a (single) problem. The context of usage may quickly change in mobile settings. This includes, but is not limited to, varying light conditions and noise levels, or being on a train or in a meeting. In all those situations the user expects to be able to interact with the application. Mobile computing refers to the ability to carry applications with you in the form of small but still visible and recognizable computational devices. It created a whole new kind of services. The field of research of mobile computing is concerned with communication among mobile users, mobile devices and applications [52].

2.2.2 Ubiquitous and Pervasive Computing

As the development of processors and memory elements continues to fulfill Moore's prediction [39], today's technology continues to become more affordable to the masses. Moore postulated a thesis, today remembered as Moore's Law, that predicts the approximate doubling of the complexity of integrated circuits, "with respect to minimum component cost" [31, p. 375], every 24 months (or a decrease in size and cost at constant performance).
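Stated as a formula (a paraphrase of the prediction as cited here, not Moore's own wording), the complexity C of an integrated circuit after t months is approximately

    C(t) \approx C_0 \cdot 2^{t / 24}

where C_0 is the complexity at t = 0; six years (72 months), for instance, amount to a factor of 2^{72/24} = 2^3 = 8.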
As devices get smaller and gain social acceptance, new ways of using this technology emerge. Imagine a world inhabited by a large number of tiny computing devices that are capable of spontaneous communication with each other. These devices are even tiny enough to be embedded in everyday artifacts such as coffee cups, clothing or umbrellas. Consequently those devices are no longer perceived as computational devices but may be viewed as regular objects of utility. These so-called smart devices are likely to be first incorporated into objects that obviously benefit from the integration of information processing technologies (e.g. household appliances, cars, toys, or tools). But even pencils (that digitize everything being written with them), clothes (that memorize locations they have been to or remember conversations) and umbrellas (that have a subscription to an online weather forecast service and prompt their owner with a friendly reminder in case of rain) will eventually be equipped with information processing technologies. In such a world almost anything is possible. Even though some of these things might sound absurd, it is hard to predict what would actually be accepted by people. The cross-linking of everyday artifacts enables a whole range of new opportunities, and whether they will find social acceptance is still to be seen.

The previously envisioned world of communicating everyday devices has been introduced by Mark Weiser [66]. He used the term ubiquitous computing to refer to a world in which technology is omnipresent and seamlessly integrated in our environment. It will enable us to focus more on the things we are doing by removing annoyances from our lives. Weiser writes: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it" [66]. The term pervasive computing is closely related to Weiser's notion of ubiquitous computing. The main difference lies in their rather commercial (pervasive computing) or academic (ubiquitous computing) orientation. Where Weiser understood his term as futuristic and thought of an ideally human-centered and unobtrusive technology, the industry coined the term pervasive computing. It also involves omnipresent processing of information in embedded appliances, but with a stronger focus on commercial usability.

2.2.3 Wearable Computing

The previous section (see section 2.1) introduced the wearable and presented properties of such devices. The wearable computing paradigm is obviously based on wearable computers.

The problems discussed under the subsection "Mobile Computing" can also be applied to the field of wearable computing. In fact the mentioned problems are even more relevant, as wearable devices tend to be still more limited than mobile devices (e.g. in terms of screen size and computing power). In wearable computing the computational apparatus is always on and ready for use, as it is worn on the user's body (e.g. in clothing, on a belt, etc.) [34]. It utilizes the benefits of multi-modal interaction (e.g. visual or tactile representations, speech or textual input) in order to allow unobtrusive human-computer interaction in a form that is socially graceful at the time. A user of a wearable system typically pursues a primary task that resides in the real world and needs information which his wearable system provides. Furthermore, wearable systems can be used to aid their wearers in interacting with surrounding information technology (IT) systems and existing IT infrastructure [29]. However, the interaction should require a minimum of attention, as the wearer is likely to be distracted by real-world influences. An inappropriate interaction speed would degrade the usability of the entire system and possibly cause its user to trip or bump into something when walking during interaction periods. The more attention is required for interaction, the more likely a user is to miss things in the real world. Wearable computing refers to the ability to wear applications but does not require the computational device to be visible or recognizable as such. It may seamlessly be integrated into the clothing of its user or be hidden underneath it.

2.3 Towards an Abstract User Interface

The main purpose of this section is to lead to an understanding of AUIs. Consequently different types of UIs will be introduced, and it will be shown that certain UI elements can be expressed on a more abstract level. This leads towards a description of AUIs, in contrast to concrete ones, and of how WUIs might benefit from them. At first graphical, auditory and tactile UIs are described. This is followed by a description of wearable and abstract UIs. While UIs typically cover both directions, human-computer and computer-human transmission of information, the following part focuses on information presentation. Therefore the transmission towards a human will be in the foreground: rather than human-computer interaction (HCI), a description is given of the modalities that a computer can use to convey information. The most important function of a display is to convey information, and therefore the term display is not only used in the context of GUIs. In the following, several displays will be introduced that utilize different senses.

2.3.1 Graphical, Auditory and Tactile User Interfaces

How humans perceive their environment has been studied for many years. Detailed knowledge exists on human sensation and perception. E.g. senses like sight and hearing are well understood. They are used to bi-directionally convey information; thus they can be used to display and receive information. In principle, all five human senses (sight, hearing, taste, smell and touch) can be used to convey information. However, the senses most commonly used for information transmission purposes are:

Sight: Probably the most established way to convey information. GUIs are part of a large number of computer systems and can display a lot of information.

Hearing: Auditory UIs are commonly used for attention-grabbing purposes, e.g.
when an alert box appears on the screen, or as a form of acknowledgment for clicking a button or deleting a file.

Touch: Tactile UIs use the sense of touch to display information. E.g. small vibration motors can just as well be used for attention-grabbing purposes as auditory signals.

These three types of UI will be highlighted in the following. The remaining two types of interfaces will be omitted, as the senses of taste and smell are still underutilized in the field of computationally controlled user interfaces. Possible reasons might be that interfaces based on such senses are not practical enough or do not share social acceptance. Especially utilizing the sense of smell has privacy issues, as the user is not necessarily the only person perceiving interface-generated odors.

Graphical

Graphical UIs are a common way of presenting information. The sense of sight has been used to convey information for thousands of years. Not only computer applications typically use visual representations; books, newspapers, pictures and sign language depend on vision as well. Stationary settings are dominated by WIMP interfaces. Almost anyone who has used a desktop computer or regular notebook is familiar with the representation of paper sheets as windows and folders as disk directories.

Auditory

The sense of hearing has likewise been utilized for thousands of years. Speech still represents a common way of conveying information. In most cases computers have the ability to give auditory feedback, which is often used to grab the user's attention or to transport simple messages (e.g. "recycle bin has been emptied", "file has been deleted", etc.) rather than complex messages (e.g. "John is on his way to work and needs to talk to you about last week's staff meeting").

Tactile

Tactile feedback is a way of providing information to users via their sense of touch. This type of interaction has been made possible by integrating tactile stimulators (e.g. tactors, piezo-electric elements, voice coils, motors and solenoids [36]) into mobile devices. As of today various forms of tactile feedback exist and have been incorporated into virtually all everyday mobile devices like mobile phones, PDAs or pagers. Their use is not limited to communication devices: the military, medicine or entertainment industries apply the same principle to their specific domains [18, 26, 47, 65]. Game pads and controllers usually have one or more tactile stimulators built in to convey information back to a user instead of using the game pad as an input device only.

Tactile stimulators provide a way of transmitting information through the cutaneous sense. Speaking more generally, they can be considered as displays, commonly called tactile or tactual displays. Such displays have the potential of greatly improving interactions on mobile devices and wearable computers. Existing visual and auditory interfaces can be enhanced where the visual and auditory channels are overloaded with information or where auditory and visual information is simply obscured. In other words, overloaded displays can be cleaned up by distributing the displayed information across multiple senses. Tactile displays also provide ways of communicating with visually impaired people, as they do not rely on the visual sense. The cutaneous sense (sense of touch) of humans is known to be an effective way of transporting information [12]. The Tadoma method is used by deaf-blind people to tactually receive speech information. In Tadoma one perceives speech through placing one hand on the face and neck of a talker [63], monitoring the movement of the lips and the vibrations at the surface of the talker's neck.
Research by Reed and Rabinowitz highlights the capabilities of experienced Tadoma users [49], as such users can understand everyday speech at high levels. Although this example shows that the human sense of touch is capable of perceiving complex information, it is still underutilized in the field of human-computer interfaces in comparison to vision and audition [12, 63].

2.3.2 Wearable User Interfaces

The previous section (see section 2.2) mentioned some of the problems encountered on mobile platforms when applications are not designed for mobile settings. These problems apply to WUIs in the same way, as the capabilities of wearables are often as limited as (or even more limited than) those of mobile devices. A WUI is a UI for wearable computers. The senses most likely to be used by WUIs are the visual, auditory and tactile senses [68, p. 22]. Detailed knowledge of the perception of these senses exists, as does social acceptance regarding their usage in a UI. All three senses are addressable by virtually all modern mobile phones (e.g. screen, speaker and vibration motor) and are familiar to the broad masses, as many people have used such a device. As stated in 2.1.3, a wearable system offers helpful information at the right time and in the right modality, of course in an unobtrusive and socially appealing way. Moreover, when using traditional UIs (e.g. graphical, auditory and tactile), a WUI would require redundant and extensive modeling, because all modalities need to be implemented individually. Some information can be represented in multiple modalities and would thus cause unnecessarily increased modeling efforts. Therefore it is desirable to have a single UI description that covers all modalities, or at least automatic methods for transporting some information from one modality to another. This can be achieved with a meta-UI or abstract UI, which is described hereafter.

2.3.3 Abstract User Interfaces

An abstract UI can, just as well as any other UI, be used to convey information. Instead of specifying how information is presented, an AUI describes what is presented. In contrast to a concrete UI, an additional layer of abstraction is added which allows the distribution of information across multiple senses and is not bound to a single monolithic representation. An abstract description can be used to generate many concrete UIs, each looking or behaving in a different way. In other words: an AUI is not dependent on a single rigid visualization but rather enables having various representations for one AUI description. The same AUI can also be used on different devices, thus facilitating reusability. Furthermore, an AUI can adapt to accommodate user preferences, device constraints or contextual information. Knowledge of human perception can be used for automatic information presentation across diverse modalities, meaning that certain kinds of information (e.g. alarms or textual output) can actually be transported from one modality to another (e.g. graphical to auditory, graphical to tactile, etc.). Besides textual information, the concept of icons can be applied to all mentioned modalities. When referring to a GUI, an icon is used to represent an action that is performed when it is clicked upon. Such actions range from executing an application to saving a document. However, icons are also used to represent a state (e.g. a weather forecast). The same concept exists in tactile and auditory user interfaces (tactons [11] and earcons [24]); they can just as well be used to represent an action or state. An AUI provides a general description of a UI that can be used on different devices, in various ways and modalities. It describes what is to be displayed rather than how it is displayed. Additionally, modeling efforts can be reduced.
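The following Java sketch condenses this idea into a few lines. The constructor and renderer shapes are assumptions; the Text name merely anticipates the abstract component of the framework developed in Chapter 4, whose actual interface may differ.

    // Sketch only: the abstract description says WHAT is shown,
    // each renderer decides HOW.
    public class AbstractUiExample {

        /** Abstract description: a piece of text, independent of any modality. */
        static class Text {
            final String content;
            Text(String content) { this.content = content; }
        }

        /** A renderer turns the abstract description into a concrete UI. */
        interface Renderer {
            void render(Text text);
        }

        static class GraphicalRenderer implements Renderer {
            public void render(Text text) {
                System.out.println("[screen] " + text.content); // e.g. an AWT label
            }
        }

        static class AuditoryRenderer implements Renderer {
            public void render(Text text) {
                System.out.println("[speech] " + text.content); // e.g. text-to-speech
            }
        }

        public static void main(String[] args) {
            Text hint = new Text("Check hydraulic pressure");
            // One abstract description drives two different modalities.
            new GraphicalRenderer().render(hint);
            new AuditoryRenderer().render(hint);
        }
    }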
The wearable PA was presented as an appropriate metaphor for wearables, in contrast to the well-known desktop metaphor. Furthermore, the mobile and ubiquitous computing paradigms were introduced and wearable computing was highlighted. AUIs were presented, as well as several other kinds of UIs.

Chapter 3 Related Work Regarding Abstract User Interfaces

The previous chapter presented background information. This chapter builds on that information and presents work related to abstract user interfaces (AUIs) for wearable computing applications. A list of frameworks is introduced and individually described: for each framework, the underlying idea and general information are presented, its pros and cons are assessed, and particular disadvantages are highlighted. Each section ends with a summary, and the chapter concludes with an overall summary.

3.1 WUI Toolkit

In 2005, Witt introduced a toolkit for context-aware user interface (UI) development for wearable computers [67], called the wearable user interface (WUI) toolkit. It was designed and developed to meet the requirements of wearable computers and aimed to ease the development of WUIs. The toolkit utilized reusable UI components and was based around a model-driven approach. It claims to support self-adapting UIs without being limited to specific interaction devices or graphical UIs [67, 71].

General

Witt's toolkit was meant to allow application developers to build WUIs without expert knowledge on creating WUIs or wearable computing, thus enabling even non-UI experts to generate context-aware and usable WUIs. Considering the rather challenging nature of developing WUIs [58], this provided an easy alternative to creating specialized WUIs from scratch. By utilizing reusable components, development efforts could be reduced further.

A model-driven approach is used in the toolkit. A UI for an application is built on an abstract model that is independent of a concrete representation. No concrete UI components (e.g. text field, button, etc.) are used; instead a UI is created with rather abstract components (e.g. information, trigger, etc.). The additional layer of abstraction allows rendering in diverse modalities instead of being limited to graphical user interfaces (GUIs) or a single visual representation.

Furthermore, the toolkit supports automatic adaption of its rendered UI according to available contextual information. At run-time a device- and context-specific UI can be generated. A wearable system may provide contextual information through sensors. Such information can be used to optimize the rendering of a UI and maintain usability when context changes occur. E.g. when walking or being in a very bright setting, the UI of a wearable application could switch from a visual to an auditory display, thus taking available context information into account. The toolkit has been used to create numerous applications and prototypes [27, 30, 37, 43, 70]. Witt describes in his PhD thesis [68] the design and evaluation of WUIs. Furthermore, he lists ten requirements of the WUI toolkit [67], which are similar to those listed in section 1.2.

Table 3.1: Summarizes pros and cons of the WUI toolkit. A list of arguments regarding the intentions, goals and usability is shown.
Pros:
- Designed for wearable computing
- Automatic generation and adaption of UIs
- Supports multimodal interaction
- Integration of context
Cons:
- No longer actively maintained
- Inconsistent and confusing interfaces
- Difficult extensibility of toolkit

Pros and Cons

An overview of pros and cons regarding the usability of the WUI toolkit may be taken from Table 3.1. They are explained in further detail hereafter. The toolkit has been specifically designed to meet the constraints and desired properties of wearable UIs. Its UI description is based on abstract components which specify what is to be displayed rather than how it is displayed. In addition, the WUI toolkit facilitates automatic UI generation and adaption according to contextual information. Multi-modal interaction is also supported. From a developer's point of view, however, the toolkit provides its users with inconsistent and confusing interfaces. Furthermore, the toolkit is no longer actively maintained. Extensibility of the toolkit (e.g. adding a new UI component) is supported but difficult to accomplish, as it is not quite clear where changes in the toolkit have to be made.

Summary

This section presented an approach to wearable UIs based on an abstract UI description, using the example of the WUI toolkit: a toolkit intended for the creation of WUIs and for easing their development for application developers. It is based around a model-driven approach, supports self-adapting interfaces and utilizes reusable components.

3.2 Huddle

Huddle is a system that uses an abstract description language for the automatic generation of task-based UIs for appliances in a multi-device environment (e.g. a home theater or presentation room). These systems of connected appliances are becoming difficult to use as the number of available services and devices increases. Huddle makes use of an XML-based language for describing the functionalities of appliances in those environments (e.g. televisions, DVD players, printers or microwave ovens). It has been used to generate graphical and speech interfaces for over thirty appliances on mobile phones, handhelds and desktop computers [40-42].

General

Huddle addresses problems that arise in multi-device environments, such as the above-mentioned home theaters, where multiple appliances are connected together. Users of those environments often find themselves in situations where they have to interact with multiple interfaces (e.g. use multiple remote controls) in order to perform a single task (e.g. watch a DVD movie). Huddle was designed to ease interaction with those systems and to support their expandability and innovation [42]. In contrast to this approach, one could incorporate multiple appliances into a single monolithic device, but such a device cannot easily be expanded, nor is it likely to have a personalized UI [42].

Content flow is an important concept applied in the design of Huddle, which is utilized to help users successfully perform their tasks. Content flows appear to be closely related to user goals with multi-appliance systems [42]. Imagine a user wanting to watch a DVD movie, thus viewing the movie on the television and listening to it via the speakers of a hi-fi system.

In order to achieve this goal, all required appliances need to be adjusted to allow the content to flow along the mentioned devices. A system's content flows can be described as the separate flows within each appliance combined with a wiring diagram [42] which shows the connections between all appliances. Such a wiring diagram is the only system-specific piece of information required by the Huddle framework. Each appliance's functionality could be modeled by its manufacturer, whereas the actual wiring is provided by the user, another application or some future wiring technology.

The framework is based on three main features that enable the creation of powerful and general home theater interfaces: a flow-based UI, a planner and aggregate interface generators. Automatic interface generation systems like Huddle allow adding further features to multi-device environments which would otherwise be difficult for (human) designers to implement. The flow-based UI allows users to specify the content flow between appliances. They choose the endpoints and optionally select a concrete path (if multiple such routes exist). The planner is used to configure appliances along the selected path to allow the requested content flow; the chosen planner is based on the well-known GraphPlan algorithm [9]. A questionnaire-like interface is displayed if a desired content flow cannot be activated, where the questions are supposed to assist the user in fixing the problem. Thirdly, the aggregate interface generators are responsible for creating the actual UIs. They automatically generate a single coherent interface for multiple appliances by using knowledge of the appliance's functions and how these functions relate to the content flows [42].

Pros and Cons

Table 3.2 shows advantages and disadvantages of the Huddle framework. The arguments are described in further detail hereafter. The description language is an integral part of the Huddle framework and is used for specifying the functionalities of appliances. Nichols et al. claim that their description language is independent of presentation and easy to learn [42]. They also state that even complicated appliances can be specified relatively quickly [40]. During six years of development and experience, Nichols et al. [40] identified weaknesses within their description language. In an early stage of development several features were added but later found to be not useful (e.g. explanations and priorities of an appliance) [40]. Instead of removing them, these features remained in the language, and it turned out that people learning the description language were confused by them. They also found that allowing multiple means of specifying certain features [40, p. 32] is likely to increase the usability and acceptance of their description language.

From the point of view of an abstract UI description, the Huddle framework provides desired functionalities such as automatic generation of UIs and use of contextual information. Depending on the wiring and the described functionalities of appliances, UIs are rendered accordingly. The Huddle framework is intended to generate the same interface on multiple devices (e.g. personal digital assistant (PDA), mobile phone, etc.) but is not capable of rendering the UI in different ways in order to accommodate user preferences. From the viewpoint of an abstract UI, a description of the functionalities of a UI might be more desirable than a description of the functionalities of appliances.
Summary

This section showed an approach to generating UIs from a specification of functionalities, using the example of the Huddle system: a system intended for use in multi-device environments. It combines functionalities of appliances with wiring information in order to generate task-based interfaces, whereby the actual generation involves additional components such as a planner and aggregate interface generators.

Table 3.2: Summarizes pros and cons of the Huddle framework. The approach followed by this system possesses various advantages and disadvantages, which are listed below.
Pros:
- Description language is XML-based and easy to learn
- Automatic generation of UIs
- Use of contextual information
Cons:
- Intended for a single UI across multiple devices
- Restricted to description of functionalities
- Unnecessary and confusing features

3.3 SUPPLE

SUPPLE is intended as an alternative to creating UIs in a hand-crafted fashion. Instead, UIs are automatically generated with respect to a person's device, abilities and preferences. The actual generation of UIs with SUPPLE is interpreted as an optimization problem [21, 22].

General

The SUPPLE system treats the generation of interfaces as an optimization problem. Provided that functionality, device and user are specified, SUPPLE computes an optimal layout. It searches for a rendering that meets the device's constraints and minimizes the user's efforts. In contrast to earlier work [13, 19, 45], the SUPPLE system also picks the individual UI elements used for the final rendition. The rendition process is based on three inputs: a functional interface specification, a description of device capabilities and user traces. Those components are described in the following.

The interface specification relies on describing what functionality should be exposed to the user rather than specifying the presentation of that functionality. Elements of such a UI description have two main attributes: a label and a data type. Types may be a primitive type (e.g. integer, floating-point number, string, etc.), an ordered list, a derivative or constraining type (e.g. the type is integer but has to be between 0 and 23) or nothing (e.g. some elements are just for presentation and do not require user interaction). A container type is also provided and allows the nesting of elements. Elements may additionally provide further information such as a set of likely values, hints on how values should be represented (e.g. true as "On", false as "Off") or an indication of whether an element is editable (or not).

The description of device capabilities is a tuple containing a set of available UI widgets, a set of device-specific constraints and two device-specific functions used to estimate the appropriateness of utilizing particular widgets in particular contexts [21]. UI widgets are used to give abstract UI elements a concrete representation, whereby a distinction exists between those that render primitive types and those that are containers. Device-specific constraints are functions mapping a full or partial set of element-widget assignments to either true or false [21] (e.g. they can be used to reflect screen size, etc.). Another function is used to evaluate the suitability for manipulating state variables of a given type [21] (e.g. the appropriateness of UI elements changes depending on likely values or on whether values typically lie close together). The last function estimates how much user effort is required for navigating through container widgets.

User traces can be used for automatic adaption of UIs. They were chosen to express the usage patterns of a particular user and are intended to provide a rendering that best fits the expectations of that user. A user trace is referred to as a set of trails, where "[...] the term trail refers to coherent sequences of elements manipulated by the user" [21]. While the interface is being used, SUPPLE accumulates trace information, which in turn is used to adapt the UI.
An interesting property of these traces is the fact that they are device-independent and can therefore be used to adapt the same UI on different devices (even on those that have not been used before by the user).

Table 3.3: Summarizes pros and cons of the SUPPLE framework. Diverse (dis-)advantages regarding the usability, goals and intentions are listed.
Pros:
- Automatic generation and adaption of UIs
- Description of functionalities of UIs
- UI generation interpreted as optimization problem
Cons:
- UI generation requires computing power not available on wearables
- Complex applications not possible
- No interchangeable renderers

Pros and Cons

Table 3.3 summarizes pros and cons that arise from using the SUPPLE framework. The presented arguments are specified in the following. The interpretation of UI generation as an optimization problem is an interesting approach. It allows the automatic generation of optimal layouts with respect to certain aspects, like user preferences, device or task. From the viewpoint of abstract UIs, this approach offers desirable functionalities. However, the SUPPLE framework has only been used to create relatively simple applications (e.g. an FTP client, an interface for controlling appliances in a university classroom, etc.) [21]. Its current data model is not capable of handling complex applications such as Microsoft Word or Outlook [21]. Furthermore, UI generation and the search for an optimal layout require computing power which is unlikely to be available on wearable devices. The accumulation of user traces and automatic adaption according to them represents a desirable property for WUIs. In addition, the description of a UI through its required functionalities, rather than its representation, is also a desired property.

Summary

This section presented an approach for automatically creating UIs with an optimal layout, using the example of the SUPPLE system: a system intended to provide an alternative to developing UIs from scratch. It is based around an abstract UI description covering functionality rather than representation, a description of device capabilities and a user model. SUPPLE treats the process of rendition as an optimization problem with respect to the layout of UI elements and user efforts.

3.4 Summary

In this chapter, three approaches to automatic UI generation were presented. The WUI toolkit uses an abstract UI description and a renderer in order to generate concrete UIs. The Huddle system describes functionality through an XML-based description language and makes use of aggregate interface generators which generate concrete UIs. The SUPPLE system interprets UI generation as an optimization problem. A comparison of the solution implementation with the presented frameworks is given in the Discussion chapter (Chapter 5).


Chapter 4 An Abstract User Interface for Wearable Computers

This chapter is the main part of the thesis and pursues the goal of allowing the resourceful and interested reader to repeat the solution implementation. It presents a solution to the research questions in section 1.2. The development of a framework for abstract user interfaces (AUIs) is shown, the rendition of AUI elements is addressed, and it is explained how further user interface (UI) elements can be added. Furthermore, an evaluation of the framework is described and its results are pointed out. A discussion of the contents of this chapter can be found in its last section.

4.1 Development

This section presents a solution to the research questions (see section 1.2). The development of a framework facilitating the creation of AUIs for wearable computing applications is shown and documented hereafter. A number of design problems are identified and separately addressed.

4.1.1 Design Problems

The answers to the research questions and the development of the framework are partitioned into a number of design problems. Each design problem highlights certain aspects of the AUI framework and summarizes particular aspects from multiple requirements found in section 1.2. Instead of directly relating to the requirements, they are pooled into design problems in order to avoid unnecessary redundancy, because their explanations would heavily overlap. Four design problems regarding the design and development of an AUI framework will be examined:

Structure and Design of Abstract User Interface Components: The internal representation of an AUI affects numerous aspects of the framework's design. Traversing of UI components will be necessary (e.g. when manipulating the contents of multiple abstract UI elements or rendering an AUI). The way AUIs are structured will influence the rest of the framework.

Dispatching an Event: The event system presents an integral part of the framework. The kinds of events and how they are distributed will affect all event-related framework parts, e.g. the way the rendering process is organized or the way changes in an AUI element are propagated. This problem addresses questions like: How is the event system structured? or When and how are events fired?

Integration of Context: How contextual information is represented will have an impact on the rendering process of the AUI and the way application developers make use of it. This design problem focuses on the representation of context within the framework rather than the recognition or generation of context.

Rendering of Abstract User Interface Components: The rendering process must be organized in a way that supports different window systems and look-and-feel standards. The rendering process should be as independent of window systems as possible. The rendered version of the AUI may not be limited to a single device or representation.

Those design problems will be discussed in the following. Each problem consists of a set of goals and the constraints which are required to fulfill the goals. Both goals and constraints will be examined in detail before a solution is suggested. It should be kept in mind that these problems merely present an overview, as they can be divided into sub-problems (or goals). During the discussion of the design problems, diagrams are used to illustrate certain aspects of the design problem. In order to avoid unnecessary redundancy, those diagrams only show information related to the particular part of the design problem being discussed; repeating all operations and attributes would unnecessarily clutter up the diagrams. More detailed information is best taken from the implementation [1].

4.1.2 Structure and Design of Abstract User Interface Components

In the chapter A Brief Review of Wearable Computing and User Interfaces (see Chapter 2), AUIs were introduced as abstract descriptions that can be rendered on multiple devices and in various ways (e.g. changing color themes, navigation techniques or look-and-feels) and modalities (e.g. visual, auditory, tactile, etc.). This design problem is concerned with questions like: How should UI elements be structured in order to appropriately represent a concrete UI? and What may application developers expect from those UI elements in terms of functionality?

Figure 4.1 shows an exemplary arrangement of UI elements for a quiz application (e.g. an application that takes a number of questions, the correct answer and several wrong answers). Its purpose is to give an example of an abstract UI description; a concrete representation is shown in Figure 4.2. The graph (Figure 4.1) includes several kinds of UI elements, each containing a different kind of data. A UI element represents a certain kind of data (e.g. Choice, Text, Trigger, Container, etc.) and thus must have some sort of data model, which can be used to access (get and set) its data. Furthermore, UI elements can be grouped in containers to express correlation among them (e.g. some components share a semantic affiliation). In the course of this subsection (4.1.2), the general structure of abstract UI components (or simply abstract components) is explained, along with how this structure can be traversed and how a component's data can be accessed and represented.

The following four requirements are related to the design problem discussed in this section:

Device-independent UI description: Definition of AUI components. A reasonable level of abstraction is chosen in order to allow rendering on a broad range of devices (e.g. personal digital assistant (PDA), head-mounted display (HMD), etc.) and usage of different software frameworks (e.g. abstract window toolkit (AWT), Swing, standard widget toolkit (SWT), etc.).

Reusability of components: Definition of AUI components which can be utilized with multiple I/O devices and modalities. The chosen level of abstraction covers a broad range of concrete UI components in various UI frameworks, devices and modalities. Furthermore, the data of an abstract component is kept separate from the component itself in order to ease reusability and promote loose coupling.

Extensibility: A hierarchy of AUI components is introduced. How further AUI components are integrated can be derived from the described implementation. Nonetheless, this topic is elaborated in greater detail in section 4.2.

Figure 4.1: Illustrates an abstract representation of a UI for a quiz application. It shows how UI elements are structured and introduces different kinds of abstract UI elements. A concrete representation of this composition is shown in Figure 4.2.

Figure 4.2: Shows a concrete representation of the abstract UI description that is illustrated in Figure 4.1. A few exemplary representations of abstract UI elements are presented. The background colors are used to highlight the dimensions of the Container components.

Support for distribution of toolkit components: The state of an AUI component is kept separately and access to it is strictly encapsulated. Stored information may reside either on the wearable system or on a remote computing machine.

The above requirements are mentioned for the sake of completeness; they are supposed to provide the reader with a means of orientation. Hereafter, various aspects of this design problem are highlighted, explained and solved. Their implementation is visualized and described in detail.

How to treat compositions and individual objects in a homogeneous way?

The framework developed in this thesis allows the construction of complex UIs from smaller and simpler UI elements. An application developer may group components to form a more complex component, which in turn can be grouped with other components to form an even more complex component. E.g. two trigger components may be grouped within a container, which in turn could be combined with a text and a choice component to compose a more complex object (see Figure 4.1). Compositions and individual components should ideally be treated in the same way, whereas a naive implementation might favor a definition of primitive components and separate components that act as containers. There are a few problems with such a naive approach: developers would have to handle containers and primitives in different ways even if their interfaces are nearly identical. Compositions and individual components would need to be distinguished, causing the application to gain unnecessary complexity and increasing development efforts.

The Composite design pattern [23, p. 163] can be used to avoid the described problems. The following paragraphs are based on the definition of the Composite design pattern by Gamma et al. [23, p. 163] and describe its implementation in the solution. The Composite design pattern is an object-based structural pattern and can be used to represent part-whole hierarchies. It allows primitive components and compositions to be treated uniformly.

Structure: The structure of the Composite pattern is illustrated in Figure 4.3. The basic idea is to create an abstract class that represents both primitive objects and containers and defines all child-related operations. Developers can then use this interface and treat both in the same way, without having to make a distinction.

Participants: The Composite design pattern has three participants, which are described hereafter.

Component: Is the main class of this design pattern. The interface for managing and accessing child components is defined by this participant, which optionally also declares the interface for accessing its parent component (if present). Furthermore, the class declares operations that are common to all components (compositions and primitive components) and provides default implementations where appropriate.

Leaf: Is a primitive Component and defines the behavior of primitive components in a composition. It does not have child components, thus representing a leaf in the tree structure.

Composite: Is a composition of Component objects. It may have child components and represents a container object that provides an implementation for child-related operations.

Collaborations: Developers are meant to interact with components through the Component interface. A request is handled directly if the recipient is a Leaf and usually forwarded to child components if the recipient is a Composite. In the latter case, additional operations may be performed before and after forwarding.

Implementation: An implementation of the Composite design pattern is shown in Figure 4.4. The naming of classes and operations has been modified to appropriately reflect abstract UI elements. The Component class defines operations for managing (addChild and removeChild) and accessing (getChildren) child components. Furthermore, a method for accessing the parent component is provided (getParent). No further operation is required with respect to treating compositions and individual components uniformly. The Container is the equivalent of the Composite class from Figure 4.3, whereas Choice, Text and Trigger correspond to Leaf classes.
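To make this concrete, the following is a minimal Java sketch of the Component hierarchy from Figure 4.4. The class and operation names follow the figure; the internal ArrayList, the unmodifiable view returned by getChildren and the parent bookkeeping are assumptions made for illustration.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Common superclass of primitive components and compositions.
public abstract class Component {
    private final List<Component> children = new ArrayList<>();
    private Component parent;

    public void addChild(Component child) {
        child.parent = this;
        children.add(child);
    }

    public void removeChild(Component child) {
        if (children.remove(child)) {
            child.parent = null;
        }
    }

    public List<Component> getChildren() {
        return Collections.unmodifiableList(children);
    }

    public Component getParent() {
        return parent;
    }
}

// Composite: groups semantically related components.
class Container extends Component { }

// Leaves: primitive abstract UI elements (their data models follow later).
class Text extends Component { }
class Trigger extends Component { }
class Choice extends Component { }
```

With this structure, the quiz UI from Figure 4.1 can be assembled by calling addChild on a Container, regardless of whether the child is another Container or a primitive component.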

Figure 4.3: Illustrates the structure of the Composite design pattern as defined by [23, p. 163]. The generic Component interface is shown as well as how Leaf and Composite elements are implemented accordingly.

Figure 4.4: Shows the implementation of the Composite design pattern as part of the structure of an AUI. It supports treating compositions (e.g. Container) as well as individual components (e.g. Choice, Text and Trigger) in a homogeneous way.

How to provide sequential access to compositions without necessarily exposing their underlying representation?

This framework allows the construction of aggregates. It is desirable to provide a uniform way of traversing such compositions without exposing their underlying representation. Depending on the context, different traversal strategies may be needed, and they should ideally share the same interface. Furthermore, it may be required to iterate over an aggregate multiple times at the same time. A naive implementation could make use of the Component interface and manually iterate over the child components. However, this solution produces unnecessary source code because it ignores the fact that the same type of iteration might be used multiple times. Thus it is likely to cause redundancy in the source code and is more likely to be error-prone.

The Iterator design pattern [23, p. 257] may be used to accomplish this requirement. It does not clutter the component's interface; instead it provides a unified interface for accessing the children of compositions and encapsulates the iteration strategy. An Iterator object has to keep track of visited elements and those that are still to be visited. An alternative could have been the Visitor design pattern [23, p. 331]. However, an implementation with the Visitor design pattern is less flexible with regard to the extensibility of the framework: adding a new component would imply the modification of the Visitor interface and the adjustment of all classes implementing that interface. Instead, the Iterator design pattern is used because it favors loose coupling and allows a seamless integration of new components.

The following paragraphs are based on the definition of the Iterator design pattern by Gamma et al. [23, p. 257] and highlight several of its properties. The Iterator design pattern is an object-based behavioral pattern. It provides sequential access to the elements of a composition without revealing the underlying representation.

Structure: The structure of the Iterator pattern is illustrated in Figure 4.5. The basic idea is to encapsulate the iteration process and provide a uniform interface.

Participants: The Iterator design pattern has four participants, which are described in the following.

Iterator: Is the main class of this design pattern. It declares operations for accessing and enumerating the elements of a composition. The interface is used for all traversal strategies.

ConcreteIterator: Is an Iterator for a concrete implementation of a traversal strategy. The ConcreteIterator holds knowledge on the ordering of the elements being traversed. It is responsible for managing the current position and choosing the next element.

Aggregate: Is an aggregate that is supposed to be traversed. It declares an operation for generating an appropriate Iterator object.

ConcreteAggregate: Is an implementation of the Aggregate interface. It is responsible for creating appropriate ConcreteIterator objects when requested.

Collaborations: A ConcreteAggregate is an Aggregate and creates a ConcreteIterator. A ConcreteIterator implements the Iterator interface and traverses its ConcreteAggregate. It remembers the current position in the composition and knows how to determine the next element in the traversal.

Implementation: An implementation of the Iterator pattern is shown in Figure 4.6. The interfaces of the Iterator design pattern have been adapted to the context of this design problem. The Iterator interface defines three operations. The hasNext operation can be used to determine whether there is at least one more element in the underlying aggregate. The next element in the traversal is returned by the next operation. The remove operation can be used to remove from the underlying aggregate the last element returned by the next operation. PreOrderIterator and PostOrderIterator are concrete implementations of the Iterator interface. The Component from the previous design pattern (Composite) has been specialized to create Iterator objects. It is equivalent to the Aggregate interface and defines the iterator operation, which returns a PostOrderIterator by default. A ConcreteComponent is an implementation of the Component and optionally overwrites the iterator operation to return a more appropriate Iterator object.
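The following sketch shows the adapted Iterator interface and a PostOrderIterator over the Component tree from the previous sketch. Collecting the complete post-order up front keeps the example short and is an assumption; the actual implementation may determine the next element lazily.

```java
import java.util.ArrayList;
import java.util.List;

// Framework-specific iterator (deliberately not java.util.Iterator).
public interface Iterator {
    boolean hasNext();
    Component next();
    void remove();
}

// Default traversal strategy returned by Component.iterator().
class PostOrderIterator implements Iterator {
    private final List<Component> order = new ArrayList<>();
    private int position = 0;
    private Component lastReturned;

    PostOrderIterator(Component root) {
        collect(root);
    }

    // Visit all children first, then the component itself.
    private void collect(Component component) {
        for (Component child : component.getChildren()) {
            collect(child);
        }
        order.add(component);
    }

    public boolean hasNext() {
        return position < order.size();
    }

    public Component next() {
        lastReturned = order.get(position++);
        return lastReturned;
    }

    // Detaches the element returned by the last call to next().
    public void remove() {
        if (lastReturned != null && lastReturned.getParent() != null) {
            lastReturned.getParent().removeChild(lastReturned);
        }
    }
}
```

Component.iterator() would return a new PostOrderIterator(this) by default, while subclasses may overwrite it to supply, e.g., a PreOrderIterator.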

Figure 4.5: Defines the structure of the Iterator design pattern [23, p. 257]. The Aggregate and Iterator interfaces are shown as well as specializations of both.

Figure 4.6: Illustrates how the Iterator design pattern has been implemented. It shows the adapted Iterator and Component (former Aggregate) interfaces. Concrete implementations of both are presented as well.

How is the (internal) data of an abstract component represented and structured?

Now that the structure of AUI components has been described and it has been shown how components can be traversed, it is time to equip them with some content. Each of the introduced concrete Component classes manages and provides access to a certain kind of data. E.g. Text components use textual data (a string) and Choice components let users select one or more options from a list (options and selections). Compositions and primitive Component (former Leaf) objects should provide access to their data in a uniform way. Even if the actual data is likely to differ significantly, it is desirable to provide a consistent interface to retrieve and set data. This avoids the same problem a naive implementation of compositions and individual objects would have had: not being able to treat them in a homogeneous way (having to handle them separately). Using the concept of separation of concerns, the following structure can be identified to address this issue.

Structure: Figure 4.7 illustrates the structure used to equip Component objects with a data model. The idea is to treat the data model of a component as a separate object. This promotes loose coupling and enables independent modification of both the component and data model interfaces.

Participants: The four participants of the structure are described in the following.

Model: Is the main interface of the structure. It defines operations implemented by all data models and is used as a tagging interface for identifying Model objects.

ConcreteModel: Is a specialization of the Model interface. It is a Model that defines further operations for accessing and managing its data, which are used by ConcreteComponent classes.

Component: Declares an interface for accessing and creating a Model object. Default implementations may also be provided by the Component class.

ConcreteComponent: Is a concrete implementation of the Component interface. It creates and provides access to an appropriate ConcreteModel object.

Collaborations: ConcreteComponent objects are responsible for creating and managing their ConcreteModel object. The ConcreteModel can be accessed and manipulated through the ConcreteComponent interface.

Implementation: An implementation of the structure can be seen in Figure 4.8. The Component class defines operations for getting and setting Model objects (getModel and setModel). The createModel operation creates a new Model object each time it is invoked. Specializations of the Component interface are represented by the Text, Choice and Trigger classes; each creates and manages its own data model. Model is a tagging interface that is used to identify data model objects. Each specialization provides operations according to the data it represents. The data models of the named Component classes, including their operations, are shown.

As described above, information of the data model may be accessed through the interface of the component. However, it is desirable to provide the choice to either manipulate the data model directly or let the component handle and forward any manipulating actions to the data model. The latter reduces the programming effort of application developers and increases readability, as they do not have to manipulate the data model in two steps (access the data model & manipulate it).
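The following sketch shows this separation for the Text component, extending the Text leaf from the earlier sketch. In the framework, getModel, setModel and createModel are declared on Component itself, and TriggerModel/ChoiceModel follow the same scheme; the concrete operations of TextModel and the DefaultTextModel class are assumptions based on Figure 4.8.

```java
// Tagging interface for all data models.
public interface Model { }

// Data model of a Text component: plain textual content.
interface TextModel extends Model {
    String getText();
    void setText(String text);
}

// Default implementation: keeps its state in a local field.
class DefaultTextModel implements TextModel {
    private String text = "";
    public String getText() { return text; }
    public void setText(String text) { this.text = text; }
}

// The component creates and owns its model, and forwards convenience
// calls, so developers do not have to fetch the model first.
class Text extends Component {
    private TextModel model = (TextModel) createModel();

    protected Model createModel() {
        return new DefaultTextModel();
    }

    public Model getModel() { return model; }
    public void setModel(Model model) { this.model = (TextModel) model; }

    // Forwarding operations: one-step access for application developers.
    public String getText() { return model.getText(); }
    public void setText(String text) { model.setText(text); }
}
```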

Figure 4.7: The design of an abstract UI component's data model is shown. The Component and Model interfaces are declared as well as specialized versions of them.

Figure 4.8: Illustrates the implementation of the abstract UI component's data models in the proposed framework. Several primitive Component classes (Text, Trigger and Choice) and their corresponding data models (TextModel, TriggerModel and ChoiceModel) are shown.

How to support different implementations of data models?

Knowing the fundamental interfaces regarding the data models of components, one can think about implementing and using them. But what is the best way to implement and represent a data model? There are plenty of ways a data model could be implemented. The most intuitive one is probably using get and set operations to access data that resides in memory. However, a network-enabled implementation that stores and loads its data from a remote computing machine, or an implementation that accesses its data through files on the hard disk, are just two further examples; they certainly represent realistic alternative implementations. In other words, it is desirable to allow the implementation of different data model families without having to adapt the data model interfaces, while at the same time being able to interchange these families.

This problem can be addressed by the Abstract Factory design pattern [23, p. 87], which is an object-based creational pattern. It enables the creation of families of dependent objects without having to specify their concrete classes. The following paragraphs are also based on the definition of the design pattern found at [23, p. 87].

Structure: The structure of the Abstract Factory pattern is illustrated in Figure 4.9. The basic idea is to define an interface for products and another one for creating those products. Related objects can then be grouped in a family and are created by their corresponding factory.

Participants: The participants of the Abstract Factory design pattern are described in the following.

AbstractFactory: Is a main class of this design pattern. It defines operations for creating AbstractProduct objects.

ConcreteFactory classes: Are AbstractFactory classes implementing the interface for creating related or dependent objects. They create the ConcreteProduct objects of one family.

AbstractProduct classes: Are also main classes of this design pattern. Each AbstractProduct defines a set of relevant operations for itself. The AbstractFactory is based on the interfaces provided by the AbstractProduct classes.

ConcreteProduct classes: Implement the interface of their corresponding AbstractProduct. A related or dependent implementation of all AbstractProduct interfaces is a family, thus a ConcreteFactory may be used to create objects of the ConcreteProduct implementations.

Collaborations: A ConcreteFactory is used to create ConcreteProduct objects of a particular implementation. Usually a single ConcreteFactory object is created at run-time and used to provide ConcreteProduct objects. If different implementations of products are required, then developers should create and use multiple ConcreteFactory objects. However, developers are to use the abstract interfaces only, in order to ensure minimum coupling with concrete implementations of the products and interchangeability of ConcreteFactory objects.

Implementation: Figure 4.10 shows the implementation of the Abstract Factory design pattern for creating different implementations of the Model interfaces. The factory and product interfaces were adapted to reflect the data models of abstract UI elements. The AbstractProduct classes were replaced by the Model interfaces from the previous question. Their concrete implementations represent a default implementation that stores its data in class variables; they are the default family. DefaultModelFactory is a ConcreteFactory that returns the corresponding default implementation for an AbstractProduct. It represents an AbstractFactory that is responsible for creating dependent or related products. The DefaultModelFactory is just one example of an AbstractFactory; different implementations using remote computing machines for retrieving and storing their data are just as well possible.

Which family of data models is used?

Having the freedom to choose from multiple implementations of the data model interface (data model families) creates the necessity to ensure consistent use of their implementations. In most cases it is desirable to have a unique instance of a ConcreteFactory that is globally accessible. This way it can be ensured that all created data model objects are from the same factory or family. The Singleton design pattern [23, p. 127] can be used to achieve this goal. It is an object-based creational pattern that provides global access to a unique instance of an arbitrary class. The following paragraphs are based on the definition of the Singleton design pattern at [23, p. 127].

Structure: The structure of the Singleton pattern is illustrated in Figure 4.11. The basic idea is to provide a class-scope operation for accessing and optionally creating the instance of interest.

Participants: The single participant of the Singleton design pattern is explained below.

Singleton: Is the main interface, which defines an interface for accessing a unique instance of itself. The Instance operation is a class operation (meaning it can be accessed without creating an instance of the class). The unique instance may also be created by the Singleton itself if no instance is supplied from the outside.

Collaborations: Developers access the instance of the Singleton object through its Instance operation only.

Implementation: The implementation of the Singleton pattern is shown in Figure 4.12. The naming of operations has been adapted to provide access to a unique instance of a ModelFactory object (e.g. DefaultModelFactory, etc.).
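The combination of both patterns can be sketched as follows, building on the Model/TextModel sketch above. The remaining model interfaces are stubbed out for brevity, and the setInstance hook for swapping in, e.g., a network-backed family is an assumption.

```java
// Stubs for the remaining model interfaces of the default family.
interface TriggerModel extends Model { /* trigger-specific operations omitted */ }
interface ChoiceModel extends Model { /* choice-specific operations omitted */ }

class DefaultTriggerModel implements TriggerModel { }
class DefaultChoiceModel implements ChoiceModel { }

// Abstract Factory for data models, combined with the Singleton pattern:
// one globally accessible factory guarantees that all created models
// belong to the same family.
public abstract class ModelFactory {
    private static ModelFactory instance;

    public static synchronized ModelFactory getInstance() {
        if (instance == null) {
            instance = new DefaultModelFactory(); // default family
        }
        return instance;
    }

    public static synchronized void setInstance(ModelFactory factory) {
        instance = factory; // e.g. a factory producing remote-backed models
    }

    public abstract TextModel createTextModel();
    public abstract TriggerModel createTriggerModel();
    public abstract ChoiceModel createChoiceModel();
}

// Default family: all models keep their state in local fields.
class DefaultModelFactory extends ModelFactory {
    public TextModel createTextModel() { return new DefaultTextModel(); }
    public TriggerModel createTriggerModel() { return new DefaultTriggerModel(); }
    public ChoiceModel createChoiceModel() { return new DefaultChoiceModel(); }
}
```

With this in place, a component's createModel operation would ask ModelFactory.getInstance() for a model instead of instantiating one directly, so that exchanging the factory exchanges the family everywhere.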
4.1.3 Dispatching an Event

The AUI framework makes use of an event-driven approach. E.g. the concrete representation is updated when changes in the data model occur, and vice versa. This obviously requires the recognition of changes and notification of the representation. The event interfaces affect at least the rendering process and the usage of the proposed framework by developers.

In the following parts of this design problem, the structure and general architecture of the event system are presented. Furthermore, questions like How are events distributed in the event system? and How to support different distribution mechanisms? are answered.

Figure 4.9: Shows the structure of the Abstract Factory design pattern as defined by [23, p. 87]. Two families of products (AbstractProduct1 and AbstractProduct2) are presented along with the AbstractFactory class. The concrete implementations of both are shown as well. Furthermore, the products with their concrete implementations and how they relate are shown.

Figure 4.10: Illustrates how the Abstract Factory design pattern can be used to facilitate the creation of different families of data models. Shown are the ModelFactory interface and an exemplary family of data model classes (DefaultModelFactory, DefaultTextModel, etc.).

Figure 4.11: The structure of the Singleton design pattern is illustrated. It shows the Singleton interface.

Figure 4.12: Shows the implementation of the Singleton design pattern for providing a unique instance of a ModelFactory object.

As part of the formulation of the research questions, several requirements were identified, of which the following two are related to the design problem at hand.

Support for distribution of toolkit components: Events are distributed within the wearable system of the user. However, they may also be distributed over a network to remote computing machines. All toolkit components using event-based communication could therefore be outsourced. This includes the externalization of model and context changes as well as the rendering of AUI components.

Support for multi-modal information presentation: Integrated sensors of a wearable system could provide contextual information that can be used during the rendering process. They may provide the basis on which the modality is chosen. Sensors fire an event when a measurement is taken or contextual information is recognized.

In the course of this design problem, particular aspects of it are pointed out and explained. Their implementation is described and visualized.

What is the architecture of the event system in the framework?

The event system, implemented as part of the proposed framework, allows the notification of interested instances in case a certain kind of event occurs. E.g. anything from a user making a text entry, over clicking on something, to programmatically changing an abstract component's data causes an event to be triggered. In order to react to those events, they must be distributed to their appropriate destinations. This part of the design problem describes the main structure of the event system in this framework. Several fundamental interfaces are introduced and it is shown how they relate to each other.

Structure: The structure of the event system is illustrated in Figure 4.13. The idea is to create three kinds of objects: those representing events, those interested in events and finally those that distribute or provide events to interested instances.

Participants: Six participants can be identified in the structure of the event system. They are described in the following.

Event: Is one of the main interfaces. It defines operations for all events and provides default implementations where appropriate. It is also used as a tagging interface for identifying Event objects.

ConcreteEvent: Is an Event object. It declares event-specific operations for getting and optionally setting the event's data.

EventListener: Is the second of the three main interfaces. It is a tagging interface for those objects interested in receiving Event objects. Optionally, it may define operations for all listeners.

ConcreteListener: Implements the EventListener interface. It declares the operations required for receiving ConcreteEvent objects.

EventProvider: Is the last of the main interfaces of this structure. It is also a tagging interface, identifying those objects that provide Event objects to interested EventListener instances. Operations for distributing events within the event system and for un-/subscribing EventListener objects are defined as part of this interface.

ConcreteProvider: Is a specialization of the EventProvider interface. It defines operations for un-/subscribing ConcreteListener objects and for distributing ConcreteEvent objects to the subscribed ConcreteListener objects.

Implementation: An excerpt of the implementation of the event system is shown in Figure 4.14. It illustrates two different kinds of events and their corresponding EventListener and EventProvider interfaces.

The three main interfaces can be viewed as tagging interfaces. EventListener and EventProvider define an empty interface; they are plainly used as such. However, the Event class declares an operation (getSource) that returns the source or origin of the event. Specialized versions of the main interfaces (the Action and PropertyChange interfaces) implement and optionally extend their corresponding interface. The ActionProvider defines operations for un- and subscribing (addActionListener and removeActionListener) as well as for notifying all subscribed ActionListener objects (notifyActionListeners). The ActionListener in turn declares an operation (performAction) that is invoked when it is notified by an ActionProvider object. The ActionEvent does not require more data to be managed than already provided by the Event class. The PropertyChange interfaces have been adapted in the same way the Action interfaces were.
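A minimal Java sketch of the Action-related interfaces just described follows. The names mirror Figure 4.14; the CopyOnWriteArrayList-backed helper class SimpleActionProvider is a hypothetical implementation, not a class from the framework (the copy-on-write list keeps notification safe against listeners unsubscribing during dispatch).

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Base event carrying its origin.
public class Event {
    private final Object source;
    public Event(Object source) { this.source = source; }
    public Object getSource() { return source; }
}

// ActionEvent needs no data beyond what Event already provides.
class ActionEvent extends Event {
    public ActionEvent(Object source) { super(source); }
}

interface EventListener { }  // tagging interface
interface EventProvider { }  // tagging interface

interface ActionListener extends EventListener {
    void performAction(ActionEvent event);
}

interface ActionProvider extends EventProvider {
    void addActionListener(ActionListener listener);
    void removeActionListener(ActionListener listener);
    void notifyActionListeners(ActionEvent event);
}

// A Trigger-like component might implement ActionProvider as follows.
class SimpleActionProvider implements ActionProvider {
    private final List<ActionListener> listeners = new CopyOnWriteArrayList<>();

    public void addActionListener(ActionListener listener) { listeners.add(listener); }
    public void removeActionListener(ActionListener listener) { listeners.remove(listener); }

    public void notifyActionListeners(ActionEvent event) {
        for (ActionListener listener : listeners) {
            listener.performAction(event);
        }
    }
}
```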
How are events propagated?

The previous part of this design problem documented the structure of the event system and described the relationship among the basic interfaces. It also mentioned that EventProvider classes notify EventListener classes when an event occurs, but not how those events actually get from the EventProvider to the EventListener. This part of the design problem is concerned with the question: How does such an event get from A to B?

One can easily imagine three kinds of event systems: those that follow a centralized approach, those following a decentralized approach and those that combine both. The centralized one has a single instance for subscribing and unsubscribing EventListener objects for either all Event objects of a certain type or all Event objects from a particular EventProvider object. There is a single, central, intelligent object that receives all events and distributes them to their corresponding listeners (a potential bottleneck). The decentralized approach does not funnel all Event objects through a single object, but instead enables EventListener objects to register and unregister themselves directly at an EventProvider object. The third approach combines the previous two. All of them have their advantages and may be particularly useful in certain situations.

Regardless of the chosen approach, the propagation of events can be solved with the Observer design pattern. The approach followed in the proposed framework is a decentralized one, as the author expects application developers to observe particular objects more often than all events of a particular type (from all objects). Furthermore, the author finds the centralized approach to be less useful in wearable scenarios: wearable systems might not have enough computing power to distribute all events through a single object and still be able to interact with the user at the same time.

The following paragraphs describe the Observer design pattern [23, p. 293] and how it can be used to solve event distribution. It is an object-based behavioral pattern. This design pattern automatically updates all dependent objects in a one-to-many dependency if the state of the root object changes. The paragraphs are based on the definition of the Observer design pattern found at [23, p. 293].

Structure: Figure 4.15 shows the structure of the Observer design pattern. The idea is to have an object that is subject to changing its state; if it does, it notifies all objects that are observing it.

Participants: The four participants are explained in the following.

Subject: Is one of the main classes of this design pattern. It defines operations for un-/subscribing and notifying interested objects. It knows its observing objects and may have any (non-negative) number of them (including zero).

ConcreteSubject: Implements the Subject interface. It holds the state that observing objects are interested in and notifies them when its state is updated or changed.

Figure 4.13: Shows the basic design of the event system in the proposed framework. It illustrates the Event class, the EventListener and EventProvider interfaces as well as their corresponding concrete versions. Furthermore, it is shown how they relate to each other.

Figure 4.14: Illustrates how the event system is implemented. It shows the ActionEvent and PropertyChangeEvent classes as well as their according EventListener (ActionListener and PropertyChangeListener) and EventProvider (ActionProvider and PropertyChangeProvider) interfaces. The relationship among them is presented along with event-specific operations.

Observer: Is the second main class of this design pattern. It declares the operations that will be invoked when a Subject changes its state and notifies its Observer objects.

ConcreteObserver: Is an Observer. It keeps track of the changes of a ConcreteSubject in order to stay up-to-date. A ConcreteObserver holds a reference to its corresponding ConcreteSubject objects, stores the state of the ConcreteSubject and updates it automatically if it changes.

Collaborations: The ConcreteSubject notifies all of its ConcreteObserver objects every time its state is updated or changed. Once a ConcreteObserver has been informed, it is likely to query the source in order to synchronize the state of the Subject and the Observer.

Implementation: The Observer design pattern can be used to synchronize the state of one object with the state of another, or to distribute events. Figure 4.16 illustrates the implementation of this design pattern and how it can be used for the latter purpose. The Subject and Observer classes were renamed to reflect the problem more accurately, and the operations and their implementations were adjusted accordingly. The EventProvider and EventListener interfaces correspond to the Subject and Observer classes, with the minor exception that they do not define operations for notifying and updating. The declaration of those operations is shifted to the ConcreteProvider and ConcreteListener classes. ConcreteListener objects can be attached (addListener) and detached (removeListener) at their corresponding ConcreteProvider and will be notified whenever notifyListeners is invoked. This calls the update operation on all registered ConcreteListener objects.

It might not be obvious, but the basic classes in the chosen implementation do not force the implementation of the Observer design pattern; the decision of how events are distributed is left to the concrete events. This has rather practical reasons, as the chosen programming language (Java) does not allow the implementation of the same interface more than once with different arguments. The problem can be avoided by using the EventProvider and EventListener interfaces as tagging interfaces and defining the actual operations for managing EventListener objects and processing events in the corresponding specialized (concrete) versions of the EventProvider and EventListener interfaces.
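As a usage sketch of this decentralized, Observer-style distribution, the following shows how a renderer could keep a concrete widget synchronized with an abstract component's model by subscribing to its PropertyChangeEvents. The PropertyChange interfaces mirror the Action interfaces from the previous sketch; the operation name propertyChanged and the event's payload are assumptions.

```java
// PropertyChangeEvent carries which property changed and its new value
// (assumed payload; the framework's actual event data may differ).
class PropertyChangeEvent extends Event {
    private final String property;
    private final Object newValue;

    PropertyChangeEvent(Object source, String property, Object newValue) {
        super(source);
        this.property = property;
        this.newValue = newValue;
    }

    String getProperty() { return property; }
    Object getNewValue() { return newValue; }
}

interface PropertyChangeListener extends EventListener {
    void propertyChanged(PropertyChangeEvent event);
}

// A graphical renderer observing a text model: when the abstract data
// changes, the concrete label is updated, without the model knowing Swing.
class TextRenderer implements PropertyChangeListener {
    private final javax.swing.JLabel label = new javax.swing.JLabel();

    public void propertyChanged(PropertyChangeEvent event) {
        if ("text".equals(event.getProperty())) {
            label.setText((String) event.getNewValue());
        }
    }
}
```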
How to support different implementations of event providers, and which family is actually used?

The previous design problem discussed how different families of data models could be created (see subsection 4.1.2). The same idea can be applied to the problem of supporting different implementations of the EventProvider interface. In most cases a default implementation that distributes events on the wearable system is sufficient. However, in some cases an implementation that enables the distribution of events across a network might be more desirable. This requires multiple implementations of the EventProvider interface, which is exactly the kind of thing the Abstract Factory design pattern can deal with: it allows the creation of separate families of EventProvider classes. In combination with the Singleton design pattern, a centralized point for creating EventProvider objects can be set up and used throughout the framework.

Structure and Participants: The structure and the corresponding participants of the involved design patterns (Abstract Factory and Singleton) have been described earlier in this chapter. For more details see How to support different implementations of data models? (Abstract Factory) and Which family of data models is used? (Singleton) in the previous design problem (see subsection 4.1.2).

Figure 4.15: Illustrates the structure of the Observer design pattern as defined by [23, p. 293]. The interfaces for a Subject and an Observer are shown as well as their corresponding ConcreteSubject and ConcreteObserver class. The relationship between them may be taken from this figure as well as the relationship among their concrete implementations.

Figure 4.16: Illustrates how the Observer design pattern is integrated in the solution implementation. It shows the relationship among the fundamental classes from Figure 4.13 when looking from the viewpoint of event distribution. The EventProvider and EventListener interfaces as well as their specialized interfaces are presented.

48 Implementation: The combination of the Singleton (see Figure 4.11) and Abstract Factory (see Figure 4.9) design patterns is presented as a solution to support creating different families of EventProvider classes (see Figure 4.17). The default family, which utilizes local distribution of events, is used as an example of how several different families could be incorporated. The according EventProviderFactory class is shown and its relationship highlighted Integration of Context Contextual information provide a desirable enrichment for abstract UIs and wearable applications. A wide range of contexts can be recognized [14,25,32,38,64] and used to adapt UIs accordingly or react appropriately to changes of the environment, user s mental state, etc. The way such information is represented affects at least the rendering process and usage of the proposed framework by developers. Both, the rendering process and developers, are expected to make use of that information and should know how it is represented and how it can be accessed. This design problem is rather concerned with the management of contextual information in the framework instead with the question How is contextual information recognized or generated?. In the course of this design problem, several interfaces are presented which are supposed to be used by those having knowledge on context recognition and those that have not. The latter group would just use the contextual information provided by the first group. Furthermore a way of measuring information and providing (external) contextual information is also introduced as part of this design problem. Three requirements, that were listed in section 1.2, are related to the design problem discussed in this subsection. They are purely mentioned as a means of orientation for the reader. Support for integration of context: Describes how contextual information is used within this framework and how application developers could use it. Two kinds of contexts exist in this framework: (1) local context which each AUI component has and (2) global context which is accessible through a central storage point. Support for distribution of toolkit components: Contextual information is stored separately and may only be accessed in a strictly encapsulated way. They may reside on either the local wearable system or another network enabled computer. Support for multi-modal information presentation: The rendering in a certain modality may be specified as part of an AUI components context. A number of different aspects related to context integration are presented in the following. They are highlighted, examined and later on a solution is suggested. The implementations will be described in detail. How is contextual information represented? Before actually being able to use contextual information, one must first know how it is represented. This is explained in the course of this part of the design problem. Contextual information is represented through key-value pairs. This is shown in Table 4.1. It shows a few environmental keys and their corresponding values. The point (. ) within the key may be used to introduce layers of contextual information and simplifies the representation in a tree-like structure. Implementation: Contextual information is represented through a Context object as Figure 4.18 illustrates. The Context class defines operations for managing contextual information. A contextual information may be added or updated using the setproperty operation. 
It can be retrieved using the getProperty operation. The state of the Context object can be stored in an XML-based configuration file (storeToXML) and restored from such a file (loadFromXML).
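To make the Context API more tangible, the following is a minimal sketch of how a developer might populate and query a Context object. Only the operation names (setProperty, getProperty, storeToXML, loadFromXML) are taken from the description above; the exact signatures, the file handling and the demo class are assumptions.

import java.io.File;

// Minimal sketch of using the Context API described above.
// Exact signatures are assumptions; only the operation names come from the text.
public class ContextDemo {
    public static void main(String[] args) throws Exception {
        Context context = new Context();

        // Dots within keys introduce layers of contextual information.
        context.setProperty("environment.illumination", "42");
        context.setProperty("environment.temperature", "23.3");

        // Retrieve a value by its key.
        String temperature = context.getProperty("environment.temperature");
        System.out.println("Temperature: " + temperature);

        // Persist the current state and restore it later.
        context.storeToXML(new File("context.xml"));
        context.loadFromXML(new File("context.xml"));
    }
}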

Figure 4.17: Shows the implementation of the Abstract Factory and Singleton design patterns as a means of supporting different families of EventProvider classes. Illustrated are the ProviderFactory class and an exemplary family of EventProvider classes (DefaultProviderFactory, DefaultPropertyChangeProvider, etc.).

Figure 4.18: Shows the implementation of a Context object that can be used to manage contextual information (getProperty and setProperty). It can be used to store and load the information from an XML-based source (loadFromXML and storeToXML).

environment.illumination    42
environment.temperature     23.3
environment.location        ( , )

Table 4.1: The representation of contextual information within the proposed framework. Keys and their corresponding values are shown.

Where is contextual information gathered? Knowing how contextual information is represented, the next natural step is to look into how and where it can be used. It is certainly desirable to have a global storage point for contextual information that can be used to store global contextual information (e.g. environmental information, the user's mental state, etc.). Less obvious is the idea of attaching such a Context object to abstract components (see Figure 4.4) as a means of expressing the context they are being used in. This provides the foundation for context-sensitive rendering (e.g. a particular component is not editable or not visible at the moment). The Singleton design pattern [23, p. 127] can be used to provide global access to a Context object, and the Component interface can simply be specialized to include access to a Context object.

Structure and Participants: The structure and participants of the Singleton design pattern were explained in a previous design problem (see "Which family of data models is used?" in 4.1.2). Attaching a Context object to an abstract component does not involve a structure that needs to be described at this point; see the implementation part instead.

Implementation: The concrete implementation of the Singleton design pattern for providing global access to a single Context object is illustrated in Figure 4.19. Furthermore, the relationship between abstract components and Context objects is shown as well. Two operations were added to the Component interface: one for accessing its Context object (getContext) and one for setting it (setContext).
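As an illustration of both kinds of context, the following sketch contrasts global and component-local access. The getContext and getProperty/setProperty operations come from the text; GlobalContext, its getInstance accessor and the key names are assumed for the example.

// Sketch: global vs. component-local context (names partly assumed).
public class ContextAccessDemo {
    public static void main(String[] args) {
        // Global context: one shared Context object via the Singleton.
        Context global = GlobalContext.getInstance(); // accessor name assumed
        global.setProperty("environment.illumination", "42");

        // Local context: each abstract component carries its own Context.
        Text label = new Text();
        label.getContext().setProperty("render.editable", "false"); // hypothetical key
    }
}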

Figure 4.19: Illustrates how the Singleton design pattern is used to provide global access to a single Context object and how abstract components manage their Context objects.

How to recognize changes of contextual information? The distribution of events has been described as part of the previous design problem (see subsection 4.1.3). The Context object makes use of the event system and notifies all interested EventListener objects when changes of the contextual information occur. In short, it acts as an EventProvider. Figure 4.20 visualizes this behavior.

4.1.5 Rendering of Abstract User Interface Components

The rendition of abstract components is a big part of the solution implementation. It must provide a way to turn abstract UI elements into concrete UI elements. However, a simple 1:1 mapping from abstract to concrete components is not sufficient: taking an abstract UI description and only allowing it to be represented in a single way would defeat the purpose of having an abstract description in the first place. Therefore, the framework should support different window systems (e.g. AWT, SWT, etc.) and multiple look-and-feel standards, and should not be limited to a single device or representation. In the course of this design problem it will be explained how the rendition process of abstract components is structured and how it works. Furthermore, the problem of keeping an abstract component synchronized with its concrete representation is addressed.

The design problem discussed in this subsection is related to the following four requirements.

Device independent UI description: Describes the process of how AUI components are rendered. The support for rendering on a wide range of devices (e.g. PDA, HMD or smartphone) and UI toolkits (e.g. AWT, Swing, SWT) is highlighted.

Reusability of components: Describes how AUI components can be reused in different UI frameworks, devices or modalities.

Extensibility: The transformation of an AUI component into a concrete UI component is described. The same principle applies to any newly added AUI components. Further explanations follow in section 4.2.

Support for multi-modal information presentation: Rendering AUI components in a certain modality is done by analyzing contextual information provided by sensors, constraints, developers or users.

Hereafter, the mentioned aspects of the rendering process and synchronization are addressed. They will be explained in greater detail, a solution is suggested and its implementation described.

How to support distribution of responsibilities across multiple renderers and ensure consistent handling of rendering requests? When rendering abstract components, it is desirable to distribute the rendering process across multiple renderers. Imagine a single rendering instance that is responsible for generating a concrete representation for all abstract components. Such a renderer would turn out to be monolithic and hardly manageable once changes need to be made.

Figure 4.20: The implementation of the Context object as a concrete EventProvider is shown. The PropertyChangeProvider interface is implemented and notifies all PropertyChangeListener objects when required.

Also, when taking extensibility into account, the approach of spreading responsibilities across multiple renderers is preferable. One renderer could handle compositions (e.g. Container), another text- and choice-based (primitive) components (e.g. Text and Choice) and a third might take care of action-based components (e.g. Trigger). An optional fourth renderer could be implemented when a new abstract component is added to the framework and used to handle the new component. Such a concatenation of renderers is shown in Figure 4.21.

Having multiple renderers creates the need for a centralized point for accessing the chain of renderers. In other words, the order in which the renderers handle or pass requests along does matter. It is also desirable to ensure that the same chain of renderers is used during all rendering actions; skipping a renderer would cause certain abstract components to not be rendered anymore. The described behavior can be achieved by using a combination of the Chain of Responsibility [23, p. 223] and Singleton design patterns [23, p. 127]. The latter has first been introduced in the course of the design problem Structure and Design of Abstract User Interface Components (see subsection 4.1.2). The Chain of Responsibility promotes a loose coupling between the sender of a request and its receiver. It is an object-based behavioral pattern: a chain of handlers is used to pass a request along until it is handled by one of them.

Structure: The Chain of Responsibility design pattern is structured as illustrated in Figure 4.22. The basic idea is to create a chain of renderers, in which each renderer has its own responsibilities. A request is passed along the chain until one renderer takes responsibility for it.

Participants: The design pattern has two participants, which are described in the following.

Handler: Is the main class of this design pattern. It declares operations for handling requests and optionally for accessing its successor.

ConcreteHandler classes: Are concrete implementations of the Handler class and are responsible for handling requests. If a ConcreteHandler can handle a request, it does so; if not, the request is forwarded to its successor.

Collaborations: A request is initiated when it is passed to the first Handler object in the chain. It is then passed from one ConcreteHandler object to another until one ConcreteHandler handles it.

Implementation: Figure 4.23 shows an implementation of the Chain of Responsibility and Singleton design patterns for rendering abstract components. The naming of interfaces and operations has been adjusted. The Renderer interface represents a Handler. It defines operations for getting and setting its successor (getNext and setNext) in the chain of renderers. An operation for handling and/or forwarding rendering requests is also declared (substantiate). Furthermore, the getInstance operation provides global access to the first renderer in the chain. By passing a request to the first renderer it is initiated and at some point in the chain handled. A few exemplary implementations of the Renderer interface are shown as well; they are merely supposed to give an idea of what Renderer objects exist in the framework and do not list all of them.
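As a sketch of how one link in this chain might look, consider a renderer that only takes responsibility for Trigger components. The Renderer operations (getNext, setNext, substantiate) come from the text; the class name and the body of the rendering branch are assumptions.

// Sketch of a single link in the chain of renderers (names partly assumed).
public class TriggerRenderer implements Renderer {
    private Renderer next;

    public Renderer getNext() { return next; }

    public void setNext(Renderer next) { this.next = next; }

    public void substantiate(Component component) {
        if (component instanceof Trigger) {
            // ... create a concrete representation (e.g. a button) for the Trigger ...
        } else if (next != null) {
            next.substantiate(component); // not responsible: pass the request along
        }
    }
}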

Figure 4.21: Shows a chain of renderers, each being responsible for rendering certain abstract components. E.g. the CompositeRenderer class renders compositions, whereas the PrimitiveRenderer1 class renders all Text and Choice components. PrimitiveRenderer2 takes care of Trigger components.

The chain of renderers has been implemented in such a way that renderers can be interchanged at run-time. While the framework handles rendering requests, the chain of renderers can be extended, shortened or rearranged. This is possible because the chain of renderers is a linked list. However, the beginning of the chain is provided by the Singleton design pattern and cannot easily be modified. This means that the first part of the chain is always the same (and is not intended to be replaced at run-time). To counteract this fact, a DummyRenderer class was implemented. It does nothing except forward rendering requests to its successor. Placing the DummyRenderer at the beginning of the chain makes it possible to interchange any meaningful renderer at run-time.

How to keep concrete and abstract components synchronized? The rendition process has been described and can be distributed across multiple renderers. Each renderer has responsibilities for certain kinds of requests or abstract components. A renderer handles a request by either determining the concrete representation of an abstract component or forwarding the request to its successor. Furthermore, a renderer has the freedom to decide whether an abstract component is represented through a single or through multiple concrete components (i.e. a 1:1 ratio between abstract and concrete UI elements is not required). The concrete representation needs to be kept synchronized with its abstract elements: if changes in the data model of an abstract component or its contextual information occur, these must be reflected in the concrete representation. In the opposite direction, when a user interacts with the UI and makes inputs, these need to be reflected in the abstract component as well. This problem can be addressed by using the Mediator design pattern [23, p. 273], an object-based behavioral pattern. It mediates between multiple objects and coordinates their requests. The following paragraphs are based on the definition of this design pattern in [23, p. 273].

Structure: The structure of the Mediator design pattern is shown in Figure 4.24. The basic idea is to use a mediating object that coordinates the interaction among Colleague objects. The Colleague objects communicate with their Mediator object whenever they would otherwise have communicated with another Colleague explicitly. This way the interaction between them can be varied independently.

Participants: The three participants of the Mediator design pattern are described in the following.

Mediator: Is the main class of this design pattern. It declares an interface that is used to communicate with Colleague objects. These operations are typically invoked by Colleague objects and are supposed to mediate the request to other Colleague objects.

ConcreteMediator: Is a concrete implementation of the Mediator interface. It knows its colleagues and coordinates them.

Colleague classes: Know their Mediator objects and communicate with them. Instead of explicitly communicating with other Colleague objects, ConcreteColleague objects pass their request to their Mediator object, which in turn handles it by delegating it to other ConcreteColleague objects.

Figure 4.22: The structure of the Chain of Responsibility design pattern is presented as defined in [23, p. 223]. A Handler interface is defined and ConcreteHandler classes are shown as well.

Figure 4.23: The implementation of the Chain of Responsibility and Singleton design patterns is illustrated as a means of rendering abstract components to concrete ones. The Renderer class and a few concrete versions of it (AWTCompositeRenderer, AWTComponentRenderer, DummyRenderer, etc.) are presented.

Collaborations: Requests are sent by ConcreteColleague objects to their Mediator object. It forwards each request to the appropriate Colleague objects.

Implementation: The framework makes use of the Mediator design pattern. It has been realized for each abstract Component class in order to keep it synchronized with its concrete representations. Figure 4.25 illustrates how an AbstractComponent class passes requests to its mediator, which then delegates them to the appropriate ConcreteComponent classes. The Mediator design pattern does not require the definition of an interface, but instead mostly describes the information flow among a number of objects. The implementation is highly dependent on the abstract component and its concrete representation. This is because the individual data model of the abstract component must be mapped to the capabilities of the concrete components and vice versa.
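The following sketch illustrates such a mediator for a Text component rendered as an AWT TextField. Only the mediator's synchronization duty comes from the text; the class, the assumed getText/setText operations of the abstract Text component and the modelChanged hook are illustrative.

import java.awt.TextField;
import java.awt.event.TextEvent;
import java.awt.event.TextListener;

// Sketch: keeps an abstract Text component and a concrete AWT TextField in sync.
public class TextMediator implements TextListener {
    private final Text abstractText;        // abstract component
    private final TextField concreteField;  // concrete representation

    public TextMediator(Text abstractText, TextField concreteField) {
        this.abstractText = abstractText;
        this.concreteField = concreteField;
        concreteField.addTextListener(this); // concrete -> abstract direction
    }

    // User input in the concrete UI is reflected in the abstract component.
    public void textValueChanged(TextEvent e) {
        abstractText.setText(concreteField.getText()); // assumed model operation
    }

    // Called when the abstract component's data model changes (assumed hook).
    public void modelChanged() {
        concreteField.setText(abstractText.getText());
    }
}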

Figure 4.24: Illustrates the structure of the Mediator design pattern as defined by [23, p. 273]. Shown are Mediator and Colleague classes as well as concrete implementations of them.

Figure 4.25: Shows the communication among a mediator and mediated objects. An AbstractComponent requests to update its ConcreteComponent classes through the ConcreteMediator object and vice versa. The ConcreteMediator object delegates the requests to the appropriate recipients.

4.1.6 Summary (of Used Design Patterns)

This chapter has described numerous aspects of the development of an AUI framework. Various design problems regarding the structure, design and rendering of abstract components, the integration of context and the event system were identified, explained and solved. The solutions are mainly based on design patterns that can be reviewed in [23]. The following seven design patterns were used and accordingly adapted to solve the problems at hand. Both the design patterns and the design problems they solved are listed below.

Composite: This pattern has been used to model the structure of an AUI without having to treat compositions and individual objects in different ways (see subsection 4.1.2).

Iterator: Defines traversal strategies for AUIs that can be used for ordered sequential access of all components in a composition. E.g. renderers make use of them (see subsection 4.1.2).

Abstract Factory: Is used to facilitate the creation of different families of related objects. Different implementations of data models for abstract components (e.g. a default implementation that keeps the data on the wearable system or a network-enabled implementation that stores and retrieves its data from a remote machine; see subsection 4.1.2) and different implementations of the EventProvider interface (e.g. a default implementation that distributes events on a wearable computer only or a network-enabled implementation that also distributes them across a network to other remote machines; see subsection 4.1.3) are supported this way.

Singleton: Provides a global point of access in several implementations in the framework. The AbstractFactory implementations were combined with this pattern (see subsections 4.1.2 and 4.1.3), but the Singleton design pattern was also used to provide global access to the chain of renderers (see subsection 4.1.5) and to the storage point for contextual information (see subsection 4.1.4).

Observer: This pattern is used to distribute events within the event system. E.g. when contextual information or data of an abstract component changes, an event is triggered and forwarded to all interested listeners (see subsection 4.1.3).

Chain of Responsibility: Enables spreading the rendition of abstract components across multiple renderers (see subsection 4.1.5). A rendering request is issued and passed along a chain of renderers until one claims responsibility for the request and takes care of the rendering.

Mediator: Defines a mechanism for keeping information in abstract components and their corresponding concrete representations synchronized (see subsection 4.1.5).

Even though design patterns could be used to solve most identified problems, they could not be used to solve all of them. The structural representation of the data model (see subsection 4.1.2) and the event system (see subsection 4.1.3) do not utilize design patterns. They were solved with common sense and separation of concerns in mind. As noted at the beginning of this section, diagrams were kept as small as possible; thus they include necessary operations only. More detailed information can be found in the implementation [1].

4.2 Extensibility

This section presents three ways of extending and modifying the framework developed in the previous section (see section 4.1). The options for adding a new abstract component are described. Furthermore, it is explained how the process of rendering an AUI can be modified or adjusted to accommodate special needs, e.g. in terms of determining a concrete representation for an abstract component. The following approaches to extensibility will be described: (1) using existing components to create high-level components, (2) modifying the rendering of existing components and (3) creating new abstract components.

4.2.1 Using Existing Components

The framework and its abstract components have been designed with reusability in mind. High-level components may be created by specializing a Container component and adding the required components. This approach is especially useful when an amalgamation of multiple components into a high-level component is necessary or an extended interface for accessing information in a composition is desirable.

Pros: Easy to implement and intuitive; no need to modify the rendering process.
Cons: Restricted to existing components; represents a composition and not a new component.
Table 4.2: Summarizes pros and cons of using existing abstract components in order to create high-level components.

Diverse advantages and disadvantages of this approach are shown in Table 4.2. While it is fairly easily implemented, it is also restricted to using and combining existing components only. It does not represent a truly new abstract component but rather a composition. On the upside, however, a modification of the rendering process is not necessary, as only a composition of existing (already renderable) components is used, so no new Renderer in the chain of renderers is required.

An example of a high-level component that profits from the approach presented in this section is a personal profile component: a component to represent information about a person (e.g. first and last name, phone, e-mail, etc.). The process of creating such a high-level component may be separated into three steps.

1. Create a specialized version of the Container component.
2. Add the required components to the new Container object.
3. (Optional) Define operations for accessing and managing the data of its required components.

With respect to the example, a PersonalProfile class would be created. It extends the Container component (introduced in 4.1.2). In the second step, Text components for the required fields of the profile are added as child components. This is preferably done in the constructor of the PersonalProfile class. In the third step, operations for retrieving and setting the full name (first and last name) and other fields of the profile are added to the interface of the PersonalProfile class. An implementation of the PersonalProfile class can be reviewed in the appendix (see Listing A.1); a condensed sketch follows below.
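In this sketch, the add operation, the field names and the accessor signatures are assumptions; the steps themselves follow the description above.

// Sketch of a high-level component built from existing ones (names partly assumed).
public class PersonalProfile extends Container {
    private final Text firstName = new Text();
    private final Text lastName = new Text();
    private final Text phone = new Text();

    public PersonalProfile() {
        add(firstName); // step 2: add required components as children
        add(lastName);
        add(phone);
    }

    // Step 3 (optional): convenience operations on the composition's data.
    public String getFullName() {
        return firstName.getText() + " " + lastName.getText(); // assumed operations
    }

    public void setFullName(String first, String last) {
        firstName.setText(first);
        lastName.setText(last);
    }
}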

Following this approach, existing components can be grouped to form new high-level components that can be used just like any other abstract UI element of the framework.

4.2.2 Modifying the Rendering Process

The rendering process was designed to be dynamic, meaning that arbitrary instances can be added to or removed from the chain of renderers (or existing instances replaced). Therefore, a renderer responsible for rendering certain AUI components can be replaced with a renderer that renders them in a different way. No actually new abstract component can be created with this approach; instead, the appearance of existing components can be adjusted or completely replaced.

Pros: No need to modify configuration files; easy to implement.
Cons: Restricted to existing components; only adjusts the representation of a component.
Table 4.3: Lists advantages and disadvantages of modifying the rendering process in order to adjust the representation of particular abstract components.

Table 4.3 shows a list of arguments reflecting the pros and cons of this approach. A positive aspect of modifying the rendition process is that it can be done quite simply. On the downside, it is, just like the previous approach (see subsection 4.2.1), restricted to existing components and can only be used to modify the representation of abstract components. However, no configuration files need to be modified; this is possible because a renderer can be added and removed at runtime. Thus the new renderer can simply be inserted into the chain of renderers whenever it is needed (and removed afterward). This may be a desirable technique when a new kind of context needs to be reflected in the UI. For example, the importance of an abstract component may be highlighted by adding a frame to its graphical representation or by emphasizing its auditive representation. The following steps need to be taken in order to change the representation of an abstract component.

1. Implement a new Renderer class or specialize an existing one.
2. Handle rendering requests by checking whether the component being rendered is the one whose representation should be changed. If yes, then change it. If not, then forward the request to the successor.
3. Insert the new Renderer class into the chain of renderers.

With respect to the example, an ImportanceRenderer class that implements the Renderer interface would be created. In the second step, the Context object of the component being rendered would be inspected for information on its importance. If the component is more important than others, a bold red frame is added around it. However, if the component is of average importance, the rendering request is simply passed along to the successive renderer in the chain. Once implemented, the ImportanceRenderer must be inserted at an appropriate position in the chain of renderers. Listing A.2 shows an implementation of the ImportanceRenderer class; a condensed sketch follows below. This approach should be used when the concrete representation of abstract components must be modified.
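In the sketch below, the context key and the highlighting call are assumptions; only the Renderer operations and the described behavior come from the text.

// Sketch of a renderer reacting to an 'importance' context entry (names partly assumed).
public class ImportanceRenderer implements Renderer {
    private Renderer next;

    public Renderer getNext() { return next; }

    public void setNext(Renderer next) { this.next = next; }

    public void substantiate(Component component) {
        String importance = component.getContext().getProperty("render.importance");
        if ("high".equals(importance)) {
            // ... render with an emphasized representation, e.g. a bold red frame ...
        } else if (next != null) {
            next.substantiate(component); // average importance: forward the request
        }
    }
}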

Pros: Adds a truly new component; implementation is simple and straightforward.
Cons: Requires modification of the rendering process; total integration (incl. data model) requires changing a main interface.
Table 4.4: Shows up- and downsides of adding a completely new abstract component to the framework.

4.2.3 Addition of New User Interface Components

Adding further abstract components to the framework is a three-step process. The framework was designed with extensibility in mind and was kept simple. It presents a trade-off between how frequently new components are added and how easy the process of addition is. A number of advantages and disadvantages that arise with this approach are presented in Table 4.4. Despite being the most complex of the presented approaches, an implementation is simple and straightforward. This is the case because the interfaces that need to be realized are quite simple. For obvious reasons, a new renderer must be created (or an existing one modified) for each new component that is added to the framework. However, when a total integration (i.e. adding the new abstract component, adding a new data model and support for data model families) is desirable, the modification of a major interface is required (and consequently all implementations of it must be adjusted accordingly).

This approach becomes useful every time an abstract component is identified that is needed to achieve a certain goal. Imagine the case that a wearable application wants to display weather information; thus an abstract component including information about the weather is needed. In order to create such a component, several steps are necessary.

1. Create an appropriate data model.
2. Specialize the Component class.
3. Implement the rendering of the component.

The first step is fairly easy. One has to decide on the kind of data that the new component should provide access to. The implementation of a new data model is as easy as realizing the Model interface. With respect to the example, a WeatherModel would be created. However, it should be kept in mind that switching data model families (e.g. from default to network-enabled data models) will not have any effect on the data model that was just created. For that to work, a total integration of the new data model is required, which means that the ModelFactory interface (and consequently all implementations of it) must be adjusted. In the second step, the actual component (WeatherComponent) is implemented by specializing the Component class. The implementation makes use of the previously created data model (step 1) and optionally defines further operations. E.g. the WeatherComponent class might implement the WeatherModel interface itself by forwarding all requests to the actual WeatherModel object. This increases the usability of the component as it eases access to the data model. In the last step, a renderer is implemented and inserted into the chain of renderers at an appropriate position. The concrete implementation of a renderer has been described in the previous subsection. Listings A.3, A.4 and A.5 show the implementation of the example stated above; a condensed sketch follows below.
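In this sketch, the weather-specific operations are assumptions; only Model, Component and the forwarding idea come from the text.

// Step 1 (sketch): an appropriate data model for the new component.
interface WeatherModel extends Model {
    double getTemperature();
    void setTemperature(double celsius);
}

// Step 2 (sketch): the component specializes Component and, for convenience,
// implements WeatherModel itself by forwarding all requests to its model.
public class WeatherComponent extends Component implements WeatherModel {
    private final WeatherModel model;

    public WeatherComponent(WeatherModel model) {
        this.model = model;
    }

    public double getTemperature() { return model.getTemperature(); }

    public void setTemperature(double celsius) { model.setTemperature(celsius); }
}

// Step 3 is a renderer analogous to the earlier sketches, inserted into
// the chain of renderers at an appropriate position.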

4.2.4 Summary

Summarizing the previous subsections, the toolkit is extensible in three ways.

1. Reuse of existing components
2. Adaptation of the rendering process
3. Addition of abstract components

The first and probably most intuitive approach is to use multiple existing components and group them within one or more containers. This facilitates the reuse of AUI components and is especially useful when more complex UI components are required. Instead of using the same primitive UI components (e.g. Text, Trigger, etc.) in the same fashion in multiple places to achieve the same goal, a Container can be specialized to contain just the needed primitive UI components. Such a specialized Container may provide additional functionality for getting and setting its internal state and can of course be used as a whole.

The second approach modifies the rendering process to accommodate special needs. This allows rendering an AUI component in different ways depending on the preferences and abilities of a user or on the hardware the framework is running on. A developer may also decide to replace the concrete representation of an abstract component by specializing the Renderer class and inserting it into the Chain of Responsibility.

The third approach actually involves the integration of a new UI component. A specialized version of the Component class and an appropriate data model need to be created. Once an interface and implementation of the data model exist, the specialized Component class should make use of them. Furthermore, the specialized Component class should implement the data model's interface in order to increase usability. The last step involves the modification of the rendering process (as demonstrated in approach two) to include the newly added component.

4.3 Rendition

Rendering is an important aspect of the AbstractUI framework. In order to be able to interact with AUIs, they must first be rendered. While section 4.2 explained the process of creating renderers, this section lists a few things that should be considered when doing so. Furthermore, the questions "How can abstract components be represented?" and "How to lay out concrete components?" are addressed.

4.3.1 Common User Interface Elements

The AbstractUI framework comes with a set of common UI elements. Even though this set is not exhaustive, it enables application developers to create a broad range of maintenance applications. The following five AUI components are part of the AbstractUI framework.

Text: Is used to display text-related information. Depending on the context a Text component is being used in, it could be represented as a label, text box or text area, thus also allowing user input.

Trigger: Enables users to execute actions (e.g. send a form or save a file). A Trigger component will typically be represented as a labeled button, but could also be displayed as part of a menu.

Choice: Is responsible for displaying selection-related information (e.g. month, weekday, etc.). The most obvious representation of a Choice component is probably the combo box. However, depending on the context it is being used in, a check box, radio buttons or a list might also be suitable.

Container: Is a component that allows the addition of child components. A Container will usually be represented as some sort of panel or dialog box.

Screen: Is a top-level container. Typical representations of a Screen are frames and windows.

These five UI elements were chosen because their concrete representations cover a broad range of commonly used UI elements. By combining them, a fairly large range of applications can be created; a small sketch follows below.
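In this sketch, the constructors and the add operation are assumptions; only the five component names come from the list above.

// Sketch: composing the five common AUI components (signatures assumed).
public class SmallFormDemo {
    public static void main(String[] args) {
        Screen screen = new Screen();       // top-level container (frame/window)
        Container form = new Container();   // e.g. rendered as a panel

        Text name = new Text();             // label, text box or text area
        Choice month = new Choice();        // combo box, radio buttons, list, ...
        Trigger send = new Trigger();       // typically a labeled button

        form.add(name);
        form.add(month);
        form.add(send);
        screen.add(form);

        // Passing 'screen' to the chain of renderers would recursively
        // render the whole composition (see subsection 4.1.5).
    }
}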

4.3.2 General Things to Consider

The process of rendering an abstract UI component has been described previously; it does not dictate how an abstract component is actually rendered or how a concrete representation is determined. The process merely provides a means to distribute responsibilities across multiple rendering instances (and to pass rendering requests along a chain of renderers). However, there are a few things that should be considered prior to implementing a renderer.

Intended Device(s) and Audience: A renderer typically makes use of a particular window system and is therefore bound to devices that support that window system. When implementing a renderer for a certain device (or family of devices), the availability of window systems should be evaluated. Where applicable, the option to use or modify an existing renderer should also be considered. Each device is used by a certain kind of audience. This means that the users of the iPhone are not necessarily the same that use the Openmoko [46]. Those are two different audiences, and each group of users has its own stereotypes, behavioral conventions and routines, which should be taken into account. If the goal is to reach a particular group of users, it might be useful to look for devices that are being used by them.

Representation of Abstract Components: Abstract components need to be turned into concrete representations before a user can actually interact with them. There are several ways of doing this; two basic ones are described in the following.

Static Representation: Each abstract component has a single concrete representation which is used every time it is rendered. In other words, a strict 1:1 mapping from abstract components to their concrete representations is used. E.g. a Text component is rendered as a text field while a Trigger component might always be represented as a button.

Dynamic Representation: Each abstract component might have multiple concrete representations which are used depending on the context and circumstances. In other words, the concrete representation of a particular abstract component is based on a set of rules that dictate which representation should be used under which circumstances. E.g. a Text component may gain the information that it is now editable, and thus its pure textual representation must be adapted to a text field where text can be entered by the user.

Support for Look-and-Feel Standards: Some window systems support look-and-feel standards (e.g. Swing). They can be used to adapt the appearance of a UI (e.g. plain or decorated buttons) and the way it is being used (e.g. close buttons in the upper left or right corner). When implementing a renderer, one has the freedom to support look-and-feel standards. This can be done by relying on existing look-and-feel standards or by implementing them yourself. In the latter case, one should consider allowing the coloring and font sizes of concrete representations to be changed to meet users' preferences.

Laying Out Concrete Components: The concrete representation of an AUI must be laid out just like any other UI that is graphically represented. In principle, the same techniques used for graphical user interfaces (GUIs) can be applied to the rendition of AUIs. Where appropriate, it should be considered to rely on existing layout managers.

Assumptions on Screen Space: The screen resolution of mobile and wearable devices tends to be relatively small compared to desktop computers. Thus not all components necessarily fit in the given screen space.

A renderer has the freedom to choose whether all components can be rendered on a given screen or not. It might simply assume that everything fits in the given space and optionally use scrollbars if it does not. On the other hand, a renderer could also detect that not all components can be viewed in the given screen space and therefore split the visualization across multiple screens.

4.3.3 Developing a Prototypical Renderer

Several steps are required for creating a renderer; four basic ones are described in the following. There are two basic scenarios regarding the complexity of a renderer. (1) Each abstract component has a single concrete representation (static representation), meaning that only as many concrete representations are required as abstract ones exist. (2) One or more abstract components have multiple concrete representations (dynamic representation), meaning that more concrete representations are required than abstract ones exist. In both cases, concrete representations also include compositions of concrete components.

Window System: Before actually implementing a renderer, one should choose a window system for rendering concrete representations of an AUI. The selected window system is likely to depend on the targeted device(s) of the renderer. Once a window system has been selected, it should be evaluated which UI components it provides.

Concrete Representations: Knowing the available UI components on a particular device or family of devices, one can start to think about which of them might be useful. Useful is meant in the sense that they could be used to represent an abstract component either directly or in combination with further concrete components. Depending on the complexity of the renderer, quite a lot of concrete components might come into question. Having decided on which concrete representations to use, it should be considered to make use of the Factory or Abstract Factory design pattern. Both encapsulate the creation of (concrete) representations and make it easy to adapt them afterward if necessary. In the case of the Abstract Factory design pattern, further families of concrete representations (e.g. components using different colors, fonts, etc.) can easily be created and used without having to modify anything else.

Mapping and Mediating: All concrete representations that could be used to represent abstract components have been evaluated at this point. Now it is particularly useful to think about when a concrete representation should be used to represent an abstract component; in other words, in which context or under which circumstances a particular concrete representation will be used to represent a particular abstract component. Depending on the complexity of the renderer this can be relatively easy, especially when using a static representation of abstract components. But even in the case of a dynamic representation, it is only a matter of mapping abstract components to their concrete representations. The purpose of this is to make clear how each abstract component is rendered and, optionally, when alternative representations should be used. Now one can implement mediators for each concrete representation, each of which mediates between one abstract component and at least one concrete one. Their purpose is to keep abstract and concrete representations synchronized: changes on one end (abstract or concrete) result in altering the opposite end. A sketch of a simple static mapping follows below.
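For a static representation, the mapping can be as simple as a switch from abstract component types to window-system classes, as sketched here for AWT. The chosen concrete classes are examples only; the framework's actual renderers may map differently.

import java.awt.Button;
import java.awt.TextField;

// Sketch: a static 1:1 mapping used by a hypothetical AWT renderer.
public class AwtMapping {
    static java.awt.Component map(Component abstractComponent) {
        if (abstractComponent instanceof Text) {
            return new TextField();              // Text -> text field
        }
        if (abstractComponent instanceof Trigger) {
            return new Button();                 // Trigger -> labeled button
        }
        if (abstractComponent instanceof Choice) {
            return new java.awt.Choice();        // Choice -> combo-box-like widget
        }
        return null; // compositions etc. are handled by other renderers in the chain
    }
}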

Rendition: When an abstract component is rendered, it is passed to the chain of renderers. The component is moved from renderer to renderer until one of them takes responsibility for it. It is handled by creating an appropriate concrete representation and mediator if they are not present. The appropriate representation is found by applying the rules that define which concrete representation is used for which abstract component. Furthermore, the child components of the abstract component are passed to the chain of renderers, thus recursively rendering all abstract components. The layout of the concrete representation is determined by the renderer. It either relies on existing layout managers or does the layout on its own. In the latter case, the available screen space should be taken into account when choosing concrete representations.

4.4 Evaluation

This section presents the results of an experimental evaluation of the AbstractUI framework. They indicate that the solution implementation of this thesis is better than the WUI toolkit (introduced in section 3.1) in terms of usability. A user study was conducted as part of the evaluation and is described in the course of this section.

4.4.1 Description of User Study

The idea of the user study was to compare Witt's WUI toolkit [71] to the AbstractUI framework from the viewpoint of an application developer. Consequently, a part of the conducted user study was to use both toolkits and create applications with them.

A total of 12 subjects took part in the user study. Most of them were students at University Bremen (10 subjects) while the remaining ones were local staff members (2 subjects). 10 of them were male and 2 female. All subjects were between 23 and 31 years old (the average age was 25). At the beginning of the user study, each subject was asked to fill out a simple questionnaire. They reported their age, sex and whether or not they had used the toolkits beforehand. Furthermore, subjects rated their Java skills on a scale from 1 to 10 (1 meaning very poor and 10 meaning very good) as well as their English skills, also on a scale from 1 to 10.

Having done that, all subjects were given time to prepare for the programming exercises. Subjects were given several minimalistic examples demonstrating the use of key components of both toolkits (e.g. Trigger in the AbstractUI framework and ExplicitTrigger in the WUI toolkit). The examples for the AbstractUI framework and the WUI toolkit contained the same functionality. One example for each key component and toolkit was presented to the subjects. All eight examples (4 exercises × 2 toolkits) can be viewed in the appendix (see listings in Appendix B). Furthermore, an overview of the components relevant to the user study was given to the subjects. It listed the four key components and where to find them in the corresponding toolkit. Where appropriate, additional information on the use of a component was presented. A copy of the overview can be found in Appendix B. Subjects could then look through the resources and ask questions at their own leisure. They were told that all resources were allowed to be used throughout the entire user study. Once they felt comfortable with them, they started with the programming exercises.

A total of four exercises (one for each key component), which had to be completed with both toolkits, were part of the user study. Subjects had about 5 to 8 minutes to solve a given problem in a particular toolkit.
Once an exercise was completed in one toolkit, subjects were asked to fill out a NASA TLX form. The same exercise was then completed using the remaining toolkit and the subjects were asked to fill out another NASA TLX form. Having completed the exercise with both toolkits, subjects were asked to rate how well they could solve the exercise in each toolkit on a scale from 1 to 10. This was repeated for all exercises.

The programming was done using the Eclipse IDE [20] with preconfigured Java projects. The projects contained two sets of empty Java classes, one set for each toolkit. The order of the exercises was changed with each subject, as was the order of the toolkits within each exercise. The sequence of both was balanced across all subjects. Having completed all exercises, the subjects were asked whether or not they felt they had mixed up the toolkits. They were also asked to rate their overall experience with both toolkits on a scale from 1 to 10. The user study then finished with an informal interview in which all subjects were given a chance to comment on everything they felt noteworthy.

4.4.2 Results

The subjects' NASA TLX and usability ratings were analyzed using a paired two-tailed t-test (the underlying statistic is sketched at the end of this section). The findings are depicted in the following. The difference in NASA TLX ratings between the AbstractUI framework and the WUI toolkit was statistically significant (p < 0.01). The average NASA TLX ratings for both toolkits can be found in Figures 4.26 and 4.27; the latter illustrates the average NASA TLX rating for each subject, whereas the former shows the overall NASA TLX rating of both toolkits. On average, the NASA TLX rating (0 meaning very low and 100 meaning very high) was lower for the AbstractUI framework than for the WUI toolkit. The difference in usability ratings between the toolkits was also statistically significant (p < 0.01). The average usability rating for each subject is illustrated in Figure 4.29; Figure 4.28 shows the average usability rating for both toolkits. Subjects' usability ratings averaged 9.28 for the AbstractUI framework and 4.92 for the WUI toolkit (1 meaning very poor and 10 meaning very good).

4.4.3 Discussion

All 12 subjects completed the entire user study without complications. Most of them rated their own Java and English skills as good (rating ≥ 7). The average subject required about an hour to complete the user study. The fastest subject required about 40 minutes while the slowest needed almost 90 minutes. Some subjects commented on the toolkits during the informal interview session. Several of them complained about inconsistent interfaces and odd naming conventions of the WUI toolkit. All of them commented on how easy and straightforward the AbstractUI framework was in comparison to the WUI toolkit. One subject was so pleased that he was already looking forward to programming with the AbstractUI framework while still having to complete an exercise using the WUI toolkit. Other subjects felt that they spent 80% of their time understanding and programming with the WUI toolkit while the remaining 20% were used for the AbstractUI framework and paperwork. Furthermore, several subjects told the interviewer that they appreciated the slim source code they produced while completing the exercises with the AbstractUI framework.

From observation of the subjects, it became quite clear that the AbstractUI framework is simpler and more intuitive to use, and the results support this. The NASA TLX ratings of all subjects differed significantly between the toolkits and show that the AbstractUI framework requires less workload than the WUI toolkit. The usability ratings also differed significantly and indicate that the AbstractUI framework is more usable than the WUI toolkit. The results of the experimental evaluation thus indicate that the AbstractUI framework outperforms the WUI toolkit in terms of usability.
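For reference, the paired t statistic underlying this analysis can be computed as sketched below; the two-tailed p-value is then obtained from a t distribution with n-1 degrees of freedom. This is a generic illustration, not the script used in the evaluation.

// Sketch: paired t statistic for two sets of per-subject ratings.
public class PairedTTest {
    static double pairedT(double[] a, double[] b) {
        int n = a.length;
        double mean = 0;
        double[] d = new double[n];
        for (int i = 0; i < n; i++) {
            d[i] = a[i] - b[i];   // per-subject difference
            mean += d[i];
        }
        mean /= n;
        double var = 0;
        for (int i = 0; i < n; i++) {
            var += (d[i] - mean) * (d[i] - mean);
        }
        var /= (n - 1);                    // sample variance of the differences
        return mean / Math.sqrt(var / n);  // t statistic, n-1 degrees of freedom
    }
}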
4.5 Discussion

In the course of this section, several perspectives on the introduced AbstractUI framework are presented and discussed. The solution implementation is compared with related work. The requirements stated in section 1.2 are reviewed, and it is discussed whether they are fulfilled.

Figure 4.26: Illustrates the average NASA TLX rating of all subjects (0 meaning very low to 100 meaning very high).

Figure 4.27: Shows the average NASA TLX rating of each subject.

Figure 4.28: Illustrates the average usability rating of all subjects (1 meaning very poor to 10 meaning very good).

Figure 4.29: Shows the average usability rating of each subject.


Context-aware Services for UMTS-Networks*

Context-aware Services for UMTS-Networks* Context-aware Services for UMTS-Networks* * This project is partly financed by the government of Bavaria. Thomas Buchholz LMU München 1 Outline I. Properties of current context-aware architectures II.

More information

6 Designing Interactive Systems

6 Designing Interactive Systems 6 Designing Interactive Systems 6.1 Design vs. Requirements 6.2 Paradigms, Styles and Principles of Interaction 6.3 How to Create a Conceptual Model 6.4 Activity-Based Design of Interactive Systems 6.5

More information

MediaTek Natural User Interface

MediaTek Natural User Interface MediaTek White Paper October 2014 2014 MediaTek Inc. Table of Contents 1 Introduction... 3 2 Computer Vision Technology... 7 3 Voice Interface Technology... 9 3.1 Overview... 9 3.2 Voice Keyword Control...

More information

Lo-Fidelity Prototype Report

Lo-Fidelity Prototype Report Lo-Fidelity Prototype Report Introduction A room scheduling system, at the core, is very simple. However, features and expansions that make it more appealing to users greatly increase the possibility for

More information

One Device to Rule Them All: Controlling Household Devices with a Mobile Phone

One Device to Rule Them All: Controlling Household Devices with a Mobile Phone One Device to Rule Them All: Controlling Household Devices with a Mobile Phone William Shato Introduction This project was undertaken as part of a seminar course in Mobile Computing. While searching for

More information

CSE 440: Introduction to HCI User Interface Design, Prototyping, and Evaluation

CSE 440: Introduction to HCI User Interface Design, Prototyping, and Evaluation CSE 440: Introduction to HCI User Interface Design, Prototyping, and Evaluation Lecture 11: Inspection Tuesday / Thursday 12:00 to 1:20 James Fogarty Kailey Chan Dhruv Jain Nigini Oliveira Chris Seeds

More information

Media Guide: PowerPoint 2010

Media Guide: PowerPoint 2010 Media Guide: PowerPoint 2010 Contents Introduction... 1 Planning Your Presentation... 2 Media Preparation... 2 Optimizing Images... 3 Media and Your PowerPoint Presentation... 4 Common Tasks in PowerPoint

More information

Category Theory in Ontology Research: Concrete Gain from an Abstract Approach

Category Theory in Ontology Research: Concrete Gain from an Abstract Approach Category Theory in Ontology Research: Concrete Gain from an Abstract Approach Markus Krötzsch Pascal Hitzler Marc Ehrig York Sure Institute AIFB, University of Karlsruhe, Germany; {mak,hitzler,ehrig,sure}@aifb.uni-karlsruhe.de

More information

CPS122 Lecture: The User Interface

CPS122 Lecture: The User Interface Objectives: CPS122 Lecture: The User Interface 1. To introduce the broad field of user interface design 2. To introduce the concept of User Centered Design 3. To introduce a process for user interface

More information

Computer-based systems will be increasingly embedded in many of

Computer-based systems will be increasingly embedded in many of Programming Ubiquitous and Mobile Computing Applications with TOTA Middleware Marco Mamei, Franco Zambonelli, and Letizia Leonardi Universita di Modena e Reggio Emilia Tuples on the Air (TOTA) facilitates

More information

Human Computer Interaction - An Introduction

Human Computer Interaction - An Introduction NPTEL Course on Human Computer Interaction - An Introduction Dr. Pradeep Yammiyavar Professor, Dept. of Design, IIT Guwahati, Assam, India Dr. Samit Bhattacharya Assistant Professor, Dept. of Computer

More information

Topic 01. Software Engineering, Web Engineering, agile methodologies.

Topic 01. Software Engineering, Web Engineering, agile methodologies. Topic 01 Software Engineering, Web Engineering, agile methodologies. 1 What is Software Engineering? 2 1 Classic Software Engineering The IEEE definition: Software Engineering is the application of a disciplined,

More information

Using Principles to Support Usability in Interactive Systems

Using Principles to Support Usability in Interactive Systems Using Principles to Support Usability in Interactive Systems Mauricio Lopez Dept. of Computer Science and Engineering York University Toronto, Ontario, Canada M3J1V6 malchevic@msn.com ABSTRACT This paper

More information

Abstract. 1 Introduction. 2 Sulawesi WHAT DO WE WANT FROM A WEARABLE USER INTERFACE? Adrian F. Clark, Neill Newman, Alex Isaakidis and John Pagonis

Abstract. 1 Introduction. 2 Sulawesi WHAT DO WE WANT FROM A WEARABLE USER INTERFACE? Adrian F. Clark, Neill Newman, Alex Isaakidis and John Pagonis WHAT DO WE WANT FROM A WEARABLE USER INTERFACE? Adrian F. Clark, Neill Newman, Alex Isaakidis and John Pagonis University of Essex and Symbian Ltd Abstract Graphical user interfaces are widely regarded

More information

20 reasons why the Silex PTE adds value to your collaboration environment

20 reasons why the Silex PTE adds value to your collaboration environment 20 reasons why the Silex PTE adds value to your collaboration environment The Panoramic Telepresence Experience (PTE) from UC innovator SilexPro is a unique product concept with multiple benefits in terms

More information

Introduction to User Stories. CSCI 5828: Foundations of Software Engineering Lecture 05 09/09/2014

Introduction to User Stories. CSCI 5828: Foundations of Software Engineering Lecture 05 09/09/2014 Introduction to User Stories CSCI 5828: Foundations of Software Engineering Lecture 05 09/09/2014 1 Goals Present an introduction to the topic of user stories concepts and terminology benefits and limitations

More information

Analysis Exchange Framework Terms of Reference December 2016

Analysis Exchange Framework Terms of Reference December 2016 Analysis Exchange Framework Terms of Reference December 2016 Approved for Public Release; Distribution Unlimited. Case Number 16-4653 The views, opinions and/or findings contained in this report are those

More information

CS 4300 Computer Graphics

CS 4300 Computer Graphics CS 4300 Computer Graphics Prof. Harriet Fell Fall 2011 Lecture 8 September 22, 2011 GUIs GUIs in modern operating systems cross-platform GUI frameworks common GUI widgets event-driven programming Model-View-Controller

More information

Ubiquitous Computing. Ambient Intelligence

Ubiquitous Computing. Ambient Intelligence Ubiquitous Computing Ambient Intelligence CS4031 Introduction to Digital Media 2016 Computing Evolution Ubiquitous Computing Mark Weiser, Xerox PARC 1988 Ubiquitous computing enhances computer use by making

More information

System Challenges for Pervasive and Ubiquitous Computing

System Challenges for Pervasive and Ubiquitous Computing System Challenges for Pervasive and Ubiquitous Computing 18 th Roy Want Intel Research May 2005, ICSE 05 St. Louis th May 2005, ICSE What is Ubiquitous Computing? The most profound technologies are those

More information

Assignment 5 is posted! Heuristic evaluation and AB testing. Heuristic Evaluation. Thursday: AB Testing

Assignment 5 is posted! Heuristic evaluation and AB testing. Heuristic Evaluation. Thursday: AB Testing HCI and Design Topics for today Assignment 5 is posted! Heuristic evaluation and AB testing Today: Heuristic Evaluation Thursday: AB Testing Formal Usability Testing Formal usability testing in a lab:

More information

CS211 Lecture: The User Interface

CS211 Lecture: The User Interface CS211 Lecture: The User Interface Last revised November 19, 2008 Objectives: 1. To introduce the broad field of user interface design 2. To introduce the concept of User Centered Design 3. To introduce

More information

Multiple Dimensions in Convergence and Related Issues

Multiple Dimensions in Convergence and Related Issues Multiple Dimensions in Convergence and Related Issues S.R. Subramanya LG Electronics CDG Technology Forum Las Vegas, Oct. 7, 2005 LGE Mobile Research, USA Talk Outline Introduction» Convergence Layers

More information

Introducing MESSIA: A Methodology of Developing Software Architectures Supporting Implementation Independence

Introducing MESSIA: A Methodology of Developing Software Architectures Supporting Implementation Independence Introducing MESSIA: A Methodology of Developing Software Architectures Supporting Implementation Independence Ratko Orlandic Department of Computer Science and Applied Math Illinois Institute of Technology

More information

Tracking Handle Menu Lloyd K. Konneker Jan. 29, Abstract

Tracking Handle Menu Lloyd K. Konneker Jan. 29, Abstract Tracking Handle Menu Lloyd K. Konneker Jan. 29, 2011 Abstract A contextual pop-up menu of commands is displayed by an application when a user moves a pointer near an edge of an operand object. The menu

More information

Usability. CSE 331 Spring Slides originally from Robert Miller

Usability. CSE 331 Spring Slides originally from Robert Miller Usability CSE 331 Spring 2010 Slides originally from Robert Miller 1 User Interface Hall of Shame Source: Interface Hall of Shame 2 User Interface Hall of Shame Source: Interface Hall of Shame 3 Redesigning

More information

COMMON ISSUES AFFECTING SECURITY USABILITY

COMMON ISSUES AFFECTING SECURITY USABILITY Evaluating the usability impacts of security interface adjustments in Word 2007 M. Helala 1, S.M.Furnell 1,2 and M.Papadaki 1 1 Centre for Information Security & Network Research, University of Plymouth,

More information

Dr. Shuang LIANG. School of Software Engineering TongJi University

Dr. Shuang LIANG. School of Software Engineering TongJi University Human Computer Interface Dr. Shuang LIANG School of Software Engineering TongJi University Today s Topics UI development and Trends NUI Discussion Today s Topics UI development and Trends Development Trends

More information

Page 1. Human-computer interaction. Lecture 2: Design & Implementation. Building user interfaces. Users and limitations

Page 1. Human-computer interaction. Lecture 2: Design & Implementation. Building user interfaces. Users and limitations Human-computer interaction Lecture 2: Design & Implementation Human-computer interaction is a discipline concerned with the design, implementation, and evaluation of interactive systems for human use and

More information

CS 160: Evaluation. Professor John Canny Spring /15/2006 1

CS 160: Evaluation. Professor John Canny Spring /15/2006 1 CS 160: Evaluation Professor John Canny Spring 2006 2/15/2006 1 Outline User testing process Severity and Cost ratings Discount usability methods Heuristic evaluation HE vs. user testing 2/15/2006 2 Outline

More information

Accessible PDF Documents with Adobe Acrobat 9 Pro and LiveCycle Designer ES 8.2

Accessible PDF Documents with Adobe Acrobat 9 Pro and LiveCycle Designer ES 8.2 Accessible PDF Documents with Adobe Acrobat 9 Pro and LiveCycle Designer ES 8.2 Table of Contents Accessible PDF Documents with Adobe Acrobat 9... 3 Application...3 Terminology...3 Introduction...3 Word

More information

User Interfaces Assignment 3: Heuristic Re-Design of Craigslist (English) Completed by Group 5 November 10, 2015 Phase 1: Analysis of Usability Issues Homepage Error 1: Overall the page is overwhelming

More information

Mobile Technologies. Mobile Design

Mobile Technologies. Mobile Design Mobile Technologies Mobile Design 4 Steps: 1. App Idea 2. Users Profile Designing an App 3. App Definition Statement Include 3-5 key features 4. UI Design Paper prototyping Wireframing Prototypes 2 Idea

More information

Models, Tools and Transformations for Design and Evaluation of Interactive Applications

Models, Tools and Transformations for Design and Evaluation of Interactive Applications Models, Tools and Transformations for Design and Evaluation of Interactive Applications Fabio Paternò, Laila Paganelli, Carmen Santoro CNUCE-C.N.R. Via G.Moruzzi, 1 Pisa, Italy fabio.paterno@cnuce.cnr.it

More information

Page 1. Ideas to windows. Lecture 7: Prototyping & Evaluation. Levels of prototyping. Progressive refinement

Page 1. Ideas to windows. Lecture 7: Prototyping & Evaluation. Levels of prototyping. Progressive refinement Ideas to windows Lecture 7: Prototyping & Evaluation How do we go from ideas to windows? Prototyping... rapid initial development, sketching & testing many designs to determine the best (few?) to continue

More information

CHAPTER 1 WHAT IS TOUCHDEVELOP?

CHAPTER 1 WHAT IS TOUCHDEVELOP? CHAPTER 1 In this chapter we present an overview of how TouchDevelop works within your phone and the larger ecosystem the cloud, the communities you are involved in, and the websites you normally access.

More information

OCR Interfaces for Visually Impaired

OCR Interfaces for Visually Impaired OCR Interfaces for Visually Impaired TOPIC ASSIGNMENT 2 Author: Sachin FERNANDES Graduate 8 Undergraduate Team 2 TOPIC PROPOSAL Instructor: Dr. Robert PASTEL March 4, 2016 LIST OF FIGURES LIST OF FIGURES

More information

20480C: Programming in HTML5 with JavaScript and CSS3. Course Code: 20480C; Duration: 5 days; Instructor-led. JavaScript code.

20480C: Programming in HTML5 with JavaScript and CSS3. Course Code: 20480C; Duration: 5 days; Instructor-led. JavaScript code. 20480C: Programming in HTML5 with JavaScript and CSS3 Course Code: 20480C; Duration: 5 days; Instructor-led WHAT YOU WILL LEARN This course provides an introduction to HTML5, CSS3, and JavaScript. This

More information

Interaction Techniques. SWE 432, Fall 2016 Design and Implementation of Software for the Web

Interaction Techniques. SWE 432, Fall 2016 Design and Implementation of Software for the Web Interaction Techniques SWE 432, Fall 2016 Design and Implementation of Software for the Web Today What principles guide the design of usable interaction techniques? How can interaction designs help support

More information

Mensch-Maschine-Interaktion 1. Chapter 7 (July 15, 2010, 9am-12pm): Implementing Interactive Systems

Mensch-Maschine-Interaktion 1. Chapter 7 (July 15, 2010, 9am-12pm): Implementing Interactive Systems Mensch-Maschine-Interaktion 1 Chapter 7 (July 15, 2010, 9am-12pm): Implementing Interactive Systems 1 Implementing Interactive Systems Designing Look-And-Feel Constraints Mapping Implementation Technologies

More information

Smart Driver Assistant Software Requirements Specifications

Smart Driver Assistant Software Requirements Specifications 2016 Software Requirements Specifications SEYMUR MAMMADLI SHKELQIM MEMOLLA NAIL IBRAHIMLI MEHMET KURHAN MIDDLE EAST TECHNICAL UNIVERSITY Department Of Computer Engineering Preface This document contains

More information

Mobility Solutions Extend Cisco Unified Communications

Mobility Solutions Extend Cisco Unified Communications Mobility Solutions Extend Cisco Unified Communications Organizations worldwide have used powerful new technologies such as the Internet, IP communications, and mobility to improve their business processes.

More information

E-BALL Technology Submitted in partial fulfillment of the requirement for the award of

E-BALL Technology Submitted in partial fulfillment of the requirement for the award of A Seminar report on E-BALL Technology Submitted in partial fulfillment of the requirement for the award of Degree of Computer Science SUBMITTED TO: SUBMITTED BY: www.studymafia.org www.studymafia.org Preface

More information

Input devices are hardware devices that allow data to be entered into a computer.

Input devices are hardware devices that allow data to be entered into a computer. 1.4.2 Input Devices Input devices are hardware devices that allow data to be entered into a computer. Input devices are part of the four main hardware components of a computer system. The Image below shows

More information

Multimodal Interfaces. Remotroid

Multimodal Interfaces. Remotroid Multimodal Interfaces Remotroid Siavash Bigdeli / Christian Lutz University of Neuchatel and University of Fribourg 1. June 2012 Table of contents 1 Introduction...3 2 Idea of the application...3 3 Device

More information

Slides copyright 1996, 2001, 2005, 2009, 2014 by Roger S. Pressman. For non-profit educational use only

Slides copyright 1996, 2001, 2005, 2009, 2014 by Roger S. Pressman. For non-profit educational use only Chapter 16 Pattern-Based Design Slide Set to accompany Software Engineering: A Practitioner s Approach, 8/e by Roger S. Pressman and Bruce R. Maxim Slides copyright 1996, 2001, 2005, 2009, 2014 by Roger

More information

Usability Testing. ISBVI_Magnifier

Usability Testing. ISBVI_Magnifier T Usability Testing ISBVI_Magnifier T Magnifier Indiana School for Visually Impaired Students Project Charter Team Project Outcome Target users ISBVI Project description Magnifier for visaully impaired

More information

Strong signs your website needs a professional redesign

Strong signs your website needs a professional redesign Strong signs your website needs a professional redesign Think - when was the last time that your business website was updated? Better yet, when was the last time you looked at your website? When the Internet

More information

Spontaneous Interaction using Mobile Phones and Short Text Messages

Spontaneous Interaction using Mobile Phones and Short Text Messages Spontaneous Interaction using Mobile Phones and Short Text Messages Frank Siegemund Distributed Systems Group, Department of Computer Science, Swiss Federal Institute of Technology (ETH) Zurich, 8092 Zurich,

More information

WEB ANALYTICS A REPORTING APPROACH

WEB ANALYTICS A REPORTING APPROACH WEB ANALYTICS A REPORTING APPROACH By Robert Blakeley, Product Manager WebMD A web analytics program consists of many elements. One of the important elements in the process is the reporting. This step

More information

Ecommerce UX Nielsen Norman Group. Lecture notes

Ecommerce UX Nielsen Norman Group. Lecture notes Ecommerce UX Nielsen Norman Group Lecture notes Table of Content 5 types of EC shoppers 3 Design Trends to Follow and 3 to Avoid http://www.nngroup.com/ 5 types of EC shoppers Product focused Browsers

More information

Chapter 12 (revised by JAS)

Chapter 12 (revised by JAS) Chapter 12 (revised by JAS) Pattern-Based Design Slide Set to accompany Software Engineering: A Practitionerʼs Approach, 7/e by Roger S. Pressman Slides copyright 1996, 2001, 2005, 2009 by Roger S. Pressman

More information

Accelerates Timelines for Development and Deployment of Coatings for Consumer Products.

Accelerates Timelines for Development and Deployment of Coatings for Consumer Products. May 2010 PPG Color Launch Process Accelerates Timelines for Development and Deployment of Coatings for Consumer Products. Inspire Market Feedback/Sales Design Color Develop Designer Mass Production Marketing

More information

Shake n Send: Enabling feedback submission directly from mobile applications

Shake n Send: Enabling feedback submission directly from mobile applications Shake n Send: Enabling feedback submission directly from mobile applications Billy Landowski willand@microsoft.com Sira Rao sirarao@microsoft.com Abstract As the mobile application market continues to

More information

The LUCID Design Framework (Logical User Centered Interaction Design)

The LUCID Design Framework (Logical User Centered Interaction Design) The LUCID Design Framework (Logical User Centered Interaction Design) developed by Cognetics Corporation LUCID Logical User Centered Interaction Design began as a way of describing the approach to interface

More information

Mayhem Make a little Mayhem in your world.

Mayhem Make a little Mayhem in your world. Mayhem Make a little Mayhem in your world. Team Group Manager - Eli White Documentation - Meaghan Kjelland Design - Jabili Kaza & Jen Smith Testing - Kyle Zemek Problem and Solution Overview Most people

More information

Lecture 15. Interaction paradigms-2. CENG 412-Human Factors in Engineering July

Lecture 15. Interaction paradigms-2. CENG 412-Human Factors in Engineering July Lecture 15. Interaction paradigms-2 CENG 412-Human Factors in Engineering July 9 2009 1 Announcements Final project presentations start on July 20 Guidelines will by posted by July 13 Assignment 2 posted

More information