Wearable Computing: Summary of Georgia Tech Wearable Computing Research Papers. Presented by Jamie Cooley, Ambient Technologies, MIT Media Lab
Overview Wearable Motherboard: http://www.gtwm.gatech.edu/ Contextual Computing Group (CCG): http://www.cc.gatech.edu/ccg/
Wearable Motherboard (http://www.gtwm.gatech.edu/): Intelligent shirt designed for combat. Detects vital signs in an unobtrusive manner; detects bullet wounds. Civilian uses: SIDS monitoring, post-surgery discharge monitoring, astronauts, etc.
Wearable Motherboard (cont.) This is the third generation of the design. Connected to a PSM (Personal Status Monitor) worn at the waist. Funded by the U.S. Department of the Navy.
Wearable Motherboard (cont.) EKG and temperature monitoring
Wearable Motherboard (cont.) Zipper Attachment on Knitted Wearable Motherboard
Wearable Motherboard (cont.) Dr. Sundaresan Jayaraman, Principal Investigator, School of Textile & Fiber Engineering
Contextual Computing Group (CCG): Uses the Twiddler input device
Contextual Computing Group (CCG) (cont.): Several published works on using the Twiddler:
Twiddler Typing: One-Handed Chording Text Entry for Mobile Phones (Kent Lyons, Thad Starner, Daniel Plaisted, James Fusia, Amanda Lyons, Aaron Drew, and E. W. Looney)
Expert Chording Text Entry on the Twiddler One-Handed Keyboard (Kent Lyons, Daniel Plaisted, and Thad Starner)
Everyday Wearable Computer Use: A Case Study of an Expert User (Kent Lyons)
Using Multiple Sensors for Mobile Sign Language Recognition (Helene Brashear, Thad Starner, Paul Lukowicz, and Holger Junker)
Contextual Computing Group (CCG) (cont.): Abstract: Previously, we demonstrated that after 400 minutes of practice, ten novices averaged over 26 words per minute (wpm) for text entry on the Twiddler one-handed chording keyboard, outperforming the multi-tap mobile text entry standard. Here we present an extension of this study that examines expert chording performance. Five subjects continued the study and achieved an average rate of 47 wpm after approximately 25 hours of practice in varying conditions. One subject achieved a rate of 67 wpm, equivalent to the typing rate of the last author, who has been a Twiddler user for ten years. We provide evidence that lack of visual feedback does not hinder expert typing speed and examine the potential use of multiple character chords (MCCs) to increase text entry speed. We demonstrate the effects of learning on various aspects of chording and analyze how subjects adopt a simultaneous or sequential method of pushing the individual keys during a chord.
Contextual Computing Group (CCG) (cont.): Abstract: An experienced user of the Twiddler, a one-handed chording keyboard, averages speeds of 60 words per minute with letter-by-letter typing of standard test phrases. This fast typing rate coupled with the Twiddler's 3x4 button design, similar to that of a standard mobile telephone, makes it a potential alternative to multi-tap for text entry on mobile phones. Despite this similarity, there is very little data on the Twiddler's performance and learnability. We present a longitudinal study of novice users' learning rates on the Twiddler. Ten participants typed for 20 sessions using two different methods. Each session is composed of 20 minutes of typing with multi-tap and 20 minutes of one-handed chording on the Twiddler. We found that users initially have a faster average typing rate with multi-tap; however, after four sessions the difference becomes negligible, and by the eighth session participants type faster with chording on the Twiddler. Furthermore, after 20 sessions typing rates for the Twiddler are still increasing.
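A rough way to see why chording can eventually beat multi-tap is to count entry actions: on a standard 12-key phone pad, multi-tap needs one to four presses per letter, while a chording keyboard like the Twiddler needs a single chord per character. The sketch below is an illustrative comparison only (it uses the standard phone-pad letter layout and ignores multi-tap timeout costs, which make multi-tap even slower in practice):

```python
# Multi-tap: each letter costs 1-4 presses of its phone key (e.g. 'c' is
# three presses of "2"). Chording: one chord (one entry action) per letter.
MULTITAP = {  # standard 12-key phone pad layout
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
# Map each letter to its press count: position within the key's letters + 1.
PRESSES = {ch: i + 1 for key, letters in MULTITAP.items()
           for i, ch in enumerate(letters)}

def multitap_presses(text):
    """Total key presses to enter `text` with multi-tap (timeouts ignored)."""
    return sum(PRESSES[ch] for ch in text if ch in PRESSES)

def chord_presses(text):
    """One chord per character on a chording keyboard."""
    return sum(1 for ch in text if ch in PRESSES)

phrase = "the quick brown fox"
print(multitap_presses(phrase), chord_presses(phrase))  # → 36 16
```

For this 16-letter phrase, multi-tap needs 36 presses versus 16 chords, which is consistent with the study's finding that chording overtakes multi-tap once the chord mappings are learned.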
Contextual Computing Group (CCG) (cont.): Abstract: Wearable computers are a unique point in the mobile computing design space. In this paper, we examine the use of a wearable in everyday situations. Specifically, we discuss findings from a case study of an expert wearable computer user in an academic research setting over an interval of five weeks. We examine the use of the computer by collecting periodic screen shots of the wearable's display and utilize these screen shots in interview sessions to create a retrospective account of the machine's use and the user's context. This data reveals that the user employs the computer to augment his memory in various ways. We also found evidence of the wearable's use while engaged in another primary task. Furthermore, we discuss the intricate strategies developed by the participant that enable him to utilize the wearable in these roles.
Contextual Computing Group (CCG) (cont.): We build upon a constrained, lab-based Sign Language recognition system with the goal of making it a mobile assistive technology. We examine using multiple sensors for disambiguation of noisy data to improve recognition accuracy. Our experiment compares the results of training a small gesture vocabulary using noisy vision data, accelerometer data, and both data sets combined.
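The core idea of combining sensors, concatenating a noisy vision feature vector with a cleaner accelerometer vector before classification, can be sketched with a toy nearest-centroid classifier. Everything here (the gesture names, signature vectors, dimensions, and noise levels) is an illustrative assumption, not the paper's actual recognition system:

```python
import random

random.seed(0)

# Hypothetical per-gesture "true" signatures: a 4-D vision feature vector
# and a 3-D accelerometer feature vector (illustrative, not from the paper).
SIGNATURES = {
    "hello":  ([1, 0, 0, 0], [1, 0, 0]),
    "thanks": ([0, 1, 0, 0], [0, 1, 0]),
    "stop":   ([0, 0, 1, 0], [0, 0, 1]),
}

def observe(gesture, vision_noise=0.8, accel_noise=0.2):
    """One noisy (vision, accel) observation: vision is much noisier."""
    vis, acc = SIGNATURES[gesture]
    return ([v + random.gauss(0, vision_noise) for v in vis],
            [a + random.gauss(0, accel_noise) for a in acc])

def centroid(vectors):
    return [sum(v[i] for v in vectors) / len(vectors)
            for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Nearest-centroid classification by squared Euclidean distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda g: sqdist(sample, centroids[g]))

def accuracy(combine):
    """Train and test with vision-only or concatenated vision+accel features."""
    feats = (lambda v, a: v + a) if combine else (lambda v, a: v)
    train = {g: [observe(g) for _ in range(30)] for g in SIGNATURES}
    cents = {g: centroid([feats(v, a) for v, a in train[g]])
             for g in SIGNATURES}
    tests = [(g, observe(g)) for g in SIGNATURES for _ in range(50)]
    hits = sum(classify(feats(v, a), cents) == g for g, (v, a) in tests)
    return hits / len(tests)

acc_vision = accuracy(combine=False)
acc_both = accuracy(combine=True)
print(f"vision only: {acc_vision:.2f}  vision+accel: {acc_both:.2f}")
```

In this toy setup, adding the low-noise accelerometer features to the noisy vision features raises classification accuracy, which mirrors the disambiguation effect the experiment studies.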
Contextual Computing Group (CCG) (cont.): Other Publications:
Georgia Tech Gesture Toolkit: Supporting Experiments in Gesture Recognition
Using Multiple Sensors for Mobile Sign Language Recognition
Using GPS to Learn Significant Locations and Predict Movement Across Multiple Users
Learning Significant Locations and Predicting User Movement with GPS
Enabling Ad-Hoc Collaboration Through Schedule Learning and Prediction
Technology Trends Favor Thick Clients for User-Carried Wireless Devices
AND more! MOVIES