EYE TRACKING
RESEARCH &
APPLICATIONS
SYMPOSIUM 2000
`
`Symposium General Chair
`Andrew T. Duchowski
`
`Program Committee Co-Chairs
`Keith S. Karn
`John W. Senders
`
`Co-sponsored by ACM SIGCHI
`and ACM SIGGRAPH
`
`A Publication of ACM SIGGRAPH
`
ACM Press
`
`
`
`The Association for Computing Machinery, Inc.
`1515 Broadway
`New York, New York 10036
`
Copyright © 2000 by the Association for Computing Machinery, Inc. (ACM). Permission to make digital or hard copies of
portions of this work for personal or classroom use is granted without fee provided that the copies are not made or distributed
for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyright for
components of this work owned by others than ACM must be honored. Abstracting with credit is permitted.
`
`To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
`Request permission to republish from: Publications Department, ACM, Inc. Fax +1-212-869-0481 or e-mail
`permissions@acm.org.
`
`For other copying of articles that carry a code at the bottom of the first or last page, copying is permitted provided that the
`per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923.
`
`Notice to Past Authors of ACM-Published Articles
`ACM intends to create a complete electronic archive of all articles and/or other material previously published by ACM. If you
`have written a work that was previously published by ACM in any journal or conference proceedings prior to 1978, or any
`SIG newsletter at any time, and you do NOT want this work to appear in the ACM Digital Library, please inform
`permissions@acm.org, stating the title of the work, the author(s), and where and when published.
`
`ACM ISBN: 1-58113-280-8
Additional copies may be ordered prepaid from:

ACM Order Department
P.O. Box 11405
Church Street Station
New York, NY 10286-1405

Phone: 1-800-342-6626 (USA and Canada)
+1-212-626-0500 (All other countries)
Fax: +1-212-944-1318
E-mail: acmhelp@acm.org
`
`ACM Order Number: 434001
`
`Printed in the USA
`
`2
`
`Supercell
`Exhibit 1005
`Page 2
`
`
`
`Table of Contents
`
`Preface ............................................................................... 6
`Sponsors ............................................................................. 7
`
`Welcome & Keynote Addresses
`
`Welcome Address
`Andrew T. Duchowski
`
`Keynote Address: Four Theoretical and Practical Questions ....................................... 8
`John W. Senders
`
`Design, Text Input, Gaze-Assisted User Interfaces
`
`Design Issues of iDict: A Gaze-Assisted Translation Aid ......................................... 9
Aulikki Hyrskykari, Päivi Majaranta, Antti Aaltonen, Kari-Jouko Räihä
`
`Text Input Methods for Eye Trackers Using Off-Screen Targets ................................... 15
`Poika Isokoski
`
Effective Eye-Gaze Input Into Windows™ ..................................................... 23
`Chris Lankford
`
`Cognition, Usability
`
`Comparing Interfaces Based on What Users Watch and Do ...................................... 29
`Eric C. Crowe, N. Hari Narayanan
`
Extended Tasks Elicit Complex Eye Movement Patterns ........................................ 37
`Jeff B. Pelz, Roxanne Canosa, Jason Babcock
`
The Effects of A Simulated Cellular Phone Conversation on Search For Traffic Signs in an Elderly Sample ...... 45
Charles T. Scialfa, Lisa McPhee, Geoffrey Ho
Color Plate .......................................................................... 145
`
`Tracking Devices, Eye Movement Analysis
`
`High Image Rate Eye Movement Measurement ........................... not available for publication
`Andrew H. Clarke, Caspar Steineke, Harald Emanuel
`
GazeTracker™: Software Designed to Facilitate Eye Movement Analysis ............................ 51
`Chris Lankford
`
An Interactive Model-Based Environment for Eye-Movement Protocol Analysis and Visualization ........ 57
`Dario D. Salvucci
`
`Eye Movement Analysis
`
`Analysis of Eye Tracking Movements Using FIR Median Hybrid Filters ............................ 65
`J. Gu, M. Meng, A. Cook, M. G. Faulkner
`
`Identifying Fixations and Saccades in Eye-Tracking Protocols .................................... 71
`Dario D. Salvucci, Joseph H. Goldberg
`
`Visual Fixations and Level of Attentional Processing ........................................... 79
`Boris M. Velichkovsky, Sascha M. Dornhoefer, Sebastian Pannasch, Pieter J. A. Unema
`
`3
`
`Supercell
`Exhibit 1005
`Page 3
`
`
`
`Table of Contents
`
`Panel Discussion
`
`"Saccade Pickers" vs. "Fixation Pickers": The Effect of Eye Tracker Choice on Research Findings ........ 87
`Keith S. Karn (moderator), George McConkie, Waldemar Rojna, Dario Salvucci, John W. Senders,
Roel Vertegaal, David Wooding
`
`Gaze-Contingent Displays, Visual Search, Visual Inspection
`
`Binocular Eye Tracking in Virtual Reality for Inspection Training ................................. 89
Andrew T. Duchowski, Vinay Shivashankaraiah, Tim Rawls, Anand K. Gramopadhye,
`Brian J. Melloy, Barbara Kanki
`Color Plate .......................................................................... 146
`
`User Performance With Gaze Contingent Displays ............................................. 97
`Lester C. Loschky, George W. McConkie
`
`Evaluating Variable Resolution Displays with Visual Search: Task Performance and Eye Movements ..... 105
`Derrick Parkhurst, Eugenio Culurciello, Ernst Niebur
`
`Posters and Demonstrations
`
`"GazeToTalk"00: A Nonverbal Interface with Meta-Communication Facility ........................ 111
`Tetsuro Chino, Kazuhiro Fukui, Kaoru Suzuki
`
`Using Eye Tracking to Investigate Graphical Elements for Normally Sighted and Low Vision Users ...... 112
`Julie A. Jacko, Armando B. Barreto, Ingrid U. Scott, Josey Y. M. Chu, Holly S. Bautsch,
`Gottlieb J. Marmet, Robert H. Rosa Jr.
`
`Eye-Movement-Contingent Release of Speech and "SEE"-Multi-Modalities Eye Tracking System ....... 113
`Weimin Liu
`
`What Eye-movements tell us about Ratios and Spatial Proportions ................................ 115
`Catherine Sophian, Martha E. Crosby
`
`Technical aspects in the recording of scanpath eye movements ................................... 116
`Daniela Zambarbieri, Stefano Ramat, Carlo Robino
`
`Pupillary Response, Hand-Eye Coordination
`
`Hand Eye Coordination Patterns in Target Selection ........................................... 117
`Barton A. Smith, Janet Ho, Wendy Ark, Shumin Zhai
`Color Plate .......................................................................... 147
`
`Pupillary Responses to Emotionally Provocative Stimuli ....................................... 123
`Timo Partala, Maria Jokiniemi, Veikko Surakka
`
The Response of Eye-movement and Pupil Size to Audio Instruction While Viewing a Moving Target ..... 131
`Koji Takahashi, Minoru Nakayama, Yasutaka Shimizu
`
`Conference Committee ................................................................. 139
Cover Image Credits .................................................................. 141
`Author Index ........................................................................ 142
`Color Plate Section ................................................................... 143
`
`4
`
`Supercell
`Exhibit 1005
`Page 4
`
`
`
EFFECTIVE EYE-GAZE INPUT INTO WINDOWS™
`Chris Lankford
`Dept. of Systems Engineering
`Olsson Hall, University of Virginia
`Charlottesville, VA 22903
`804-296-3846
`cpl2b@Virginia.edu
`
`ABSTRACT
`The Eye-gaze Response Interface Computer Aid (ERICA) is a
`computer system developed at the University of Virginia that
`tracks eye movement. To allow true integration into the Windows
`environment, an effective methodology for performing the full
`range of mouse actions and for typing with the eye needed to be
constructed. With the methods described in this paper,
individuals can reliably perform all actions of the mouse and the
keyboard with their eye.
`Keywords
`Eye-gaze, disabled, mouse clicking, typing, windows.
`1. INTRODUCTION
`The Eye-gaze Response Interface Computer Aid (ERICA) is a
`computer that tracks eye movement. The system noninvasively
`tracks where a user is looking by monitoring the user's eye
`movements through a camera mounted underneath the computer
`monitor. ERICA was originally developed to assist the disabled
`by providing them with a means to communicate. The device has
`helped numerous disabled individuals since its inception in
`1983. ERICA recently moved to the Windows platform. The
`mouse cursor now points to where the user is looking on the
`screen. To fully take advantage of the capabilities Windows has
`to offer, a method to click and to type with the eye needed to be
incorporated into the graphical user interface (GUI).
`2. GAZE CLICKING
`To develop a successful "eye-mouse," the system needed to
`possess some means of performing mouse actions with the eye.
`The two obvious methods were either with a blink or by using
eye fixations (calculating how long the eye dwells at a particular
`region).
`
`Blinking seemed like a particularly noisy methodology. Detecting
`a blink seemed difficult, and since people blink involuntarily
`every several seconds, the mechanism would have to work with a
prolonged blink. Thus, using fixations seemed more reliable.
`2.1 Dwell Time Gaze Clicking
`The system uses dwell time to provide the user with access to
`various mouse clicking actions. When the user fixates (focuses at
`a point on the screen and keeps the mouse cursor stationary) for a
`predetermined amount of time on the computer display, a red
`rectangle appears, centered on the point of fixation. This
`rectangle begins collapsing in on itself. The rectangle serves as a
`visual cue to the user that if they keep fixating at that point, then
`they will perform a mouse control action at the point. The
rectangle will turn to a blue circle halfway through its collapse
and then continue collapsing. If the user looks away while the
blue circle is collapsing, then they will start dragging what they
were fixating on. If the user prolongs their fixation and allows
the circle to reach the end of its collapse, then the system clicks
once on where they were looking. Lastly, a green rectangle will
appear, and after a predetermined fixation time, the system will
double click on where the user is fixating. Figure 1 shows the
`different stages of collapse.
`
[Figure: three collapse stages, labeled "Visual Cue"; "Dragging
enabled if user looks away, single click at end of collapse"; and
"Double click at end of timing"]

Figure 1. Gaze Clicking Collapse Stages. The rectangle
migrates through various stages of collapse to signal the mouse
action it will perform.
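The collapse sequence above is, in effect, a small timer-driven state
machine. The Python sketch below illustrates one way such a mechanism
could be structured; the class name, stage durations, and fixation
radius are illustrative assumptions for this sketch, not ERICA's
actual values.

    import math

    # Illustrative dwell-time click state machine modeled on section 2.1.
    # Stage durations and the fixation radius are assumed values.
    STAGES = [
        ("red rectangle / blue circle", 1.0, "single click"),
        ("green rectangle", 1.0, "double click"),
    ]
    FIXATION_RADIUS = 20  # pixels the gaze may wander while still "fixating"

    class DwellClicker:
        def __init__(self):
            self.anchor = None   # current fixation point (x, y)
            self.elapsed = 0.0   # seconds spent fixating at the anchor
            self.stage = 0

        def update(self, gaze, dt):
            """Feed one gaze sample and a time step; returns an action or None."""
            if self.anchor is None or math.dist(gaze, self.anchor) > FIXATION_RADIUS:
                action = None
                # Looking away during the second half of the first collapse
                # (the blue circle) starts a drag at the old anchor.
                if self.stage == 0 and self.anchor and self.elapsed > STAGES[0][1] / 2:
                    action = ("start drag", self.anchor)
                self.anchor, self.elapsed, self.stage = gaze, 0.0, 0
                return action
            self.elapsed += dt
            _, duration, result = STAGES[self.stage]
            if self.elapsed >= duration:
                # Stage complete: emit its action, then move to the next stage.
                self.elapsed = 0.0
                self.stage = (self.stage + 1) % len(STAGES)
                return (result, self.anchor)
            return None

Driving update() at the tracker's sample rate reproduces the behavior
described: continued fixation walks through single click and then double
click, while a glance away mid-collapse begins a drag.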
`
`If the user knows they are going to be only using certain actions,
`like left button single clicking, then the other mouse actions may
`be disabled. For example, if dragging and double clicking are
`disabled, then the visual cue is simply a red collapsing rectangle
`that single clicks where the user is fixating when the rectangle
`reaches the end of its collapse. This is often used in children's
`games, where the only mouse action really needed is single
`clicking with the left mouse button. Also, children may have
`difficulty understanding the meaning of the collapse in the
`beginning, so disabling the multiple collapse modes facilitates
`teaching them to use the system.
`The system may use an alternative method for clicking instead of
`the dwell time. If the operator still has some use of their hands,
`
`23
`
`Supercell
`Exhibit 1005
`Page 5
`
`
`
`then pressing a single key on the keyboard will make the system
`click where they are looking. This has the advantage of greatly
`speeding up the clicking process. The key on the keyboard works
`like a mouse key. Pressing the key twice quickly performs a
`double click; holding it down allows dragging. Many of the
disabled individuals using ERICA have control over only their
eyes, so using their hands to click a key on the keyboard is not an
`option.
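For users who do retain some hand function, this mechanism maps a
single keyboard key onto mouse-button semantics. A minimal sketch of
that mapping follows; the timing thresholds are assumptions, not values
documented for ERICA.

    import time

    # Illustrative key-as-mouse-button mapping: one press clicks, two quick
    # presses double click, holding drags. Thresholds are assumed values.
    DOUBLE_PRESS_WINDOW = 0.4   # seconds between releases for a double click
    HOLD_THRESHOLD = 0.3        # seconds held down before a drag begins

    class KeyAsMouse:
        def __init__(self):
            self.last_release = 0.0
            self.pressed_at = None
            self.dragging = False

        def key_down(self):
            self.pressed_at = time.monotonic()

        def poll(self):
            # Call periodically while the key is held; starts a drag once
            # the hold threshold passes.
            if self.pressed_at and not self.dragging:
                if time.monotonic() - self.pressed_at >= HOLD_THRESHOLD:
                    self.dragging = True
                    return "start drag at gaze point"
            return None

        def key_up(self):
            now = time.monotonic()
            self.pressed_at = None
            if self.dragging:
                self.dragging = False
                return "end drag"
            if now - self.last_release <= DOUBLE_PRESS_WINDOW:
                self.last_release = 0.0
                return "double click at gaze point"
            self.last_release = now
            # A full implementation would delay this click briefly to see
            # whether a second press upgrades it to a double click.
            return "single click at gaze point"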
`However, this method of gaze clicking was not effective in
`granting adequate mouse capabilities to many disabled users.
`Despite this new method of gaze clicking and general advances
`in system accuracy, the system was still not able to reliably grant
control over the Windows environment. Even with one-centimeter
accuracy, striking Windows controls such as the minimize and
maximize buttons with gaze clicking was found to be difficult.
Also, consistently clicking the correct toolbar icon
`was troublesome, and selecting parts of text with the eye was
`hard, especially when the text was dense on the computer
`display. Running at a lower screen resolution and displaying text
`in a larger font seemed to be the only way to circumvent this
`problem.
`Testing with disabled people confirmed these results. In some
`cases, the previously described dwell time method was fine, such
`as the children's game example described above. In this case, the
`icons to "hit" were always large and accommodated the system's
`accuracy level. The only way to have the system reliably work in
`a standard Windows environment was to run at an extremely low
`resolution of 640x480 or 800x600 on a twenty-one inch monitor.
`Also, the text size had to be magnified 200 percent. This reduced
`the amount of space on the screen that showed meaningful
`information. For example, controls and text were so large in
`Microsoft Word that only a few lines of text were visible in the
`text editor portion of the program. If these conditions were not
`met, then the user may not have been able to hit their desired
`icon on the first try. Repeated attempts would usually be
`necessary. Disabled people tolerated this because they frequently
`had no other means to communicate, but this was inefficient.
`ERICA needed even higher accuracy.
ERICA's accuracy level was already at half a centimeter to one
`centimeter when the user was roughly sixty centimeters from the
`computer monitor. Pushing this accuracy level even higher
`seemed nearly impossible due to physiological effects on the eye.
`Boosting the accuracy to less than the one-millimeter level,
`which was the level needed to reliably and consistently control
`windows at high resolutions, seemed unlikely.
`The only way to overcome this accuracy issue was to make the
`objects on the screen larger as was done with the higher
`resolution and text magnification; however, the objects really did
`not need to be larger all of the time. Only an object the user
`wished to interact with needed to have an increased size. Thus, a
`zooming methodology was constructed.
`2.2 Zooming Methodology
`Zooming magnifies areas of the computer screen to allow the
`user to more reliably execute a mouse action at a desired
`location. The user first looks at a particular area where they wish
`to perform a mouse action. Unfortunately, ERICA cannot
`precisely determine where someone is looking, so the mouse
`
`cursor does not move to that exact screen point. A collapsing
`rectangle appears on the screen at the mouse cursor's location.
`When the rectangle reaches the end of its collapse (by becoming
`a point at the user's position of fixation), then a window pops up
`near the center of the screen. The region around which the user
`was fixating appears magnified in this window as shown in
`Figure 3. At the bottom of the window is an eye-gaze controlled
`button that closes the window if the user fixates on the button for
`a predefined length of time. If the user wishes to perform a
`mouse action somewhere in the magnified area, then the user
fixates in this window on where they actually wished to
`have performed a mouse action. A red rectangle again signals the
`user that continued fixation will cause another event to occur. If
`the rectangle reaches the end of its collapse, then a menu of six
`additional buttons appears in the window as shown in Figure 4.
`Fixating on a button for a predetermined amount of time causes
`that button's action to be performed. The available actions
`include left mouse button drag, left mouse button single click,
`left mouse button double click, right mouse button drag, right
`mouse button single click, and right mouse button double click.
`A collapsing red rectangle again shows which button will be
clicked if fixation continues. If the user is in the middle of a drag,
then all buttons stop the drag and then perform their set action,
with the exception of the drag start buttons, which only cease the
dragging. The fixation times for all events described are user
customizable to suit different users' skill levels in operating
the system. The size of the zoom window and the factor by which
it magnifies the area the user initially fixates at are also
adjustable. The window size and the zoom level determine the
amount of area on the screen that appears zoomed each time the
zoom window opens.
`
[Figure: panels labeled "Fixation on a Screen" and "Fixation in
Zoom Window"]

Figure 3. Zooming. When the user fixates on the
screen, a window appears with a magnified view of where the
user was fixating.
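The geometry behind the zoom window reduces to two mappings: the
screen region captured around the estimated fixation, and the
translation of a fixation inside the window back to a true screen
coordinate. The sketch below illustrates both under assumed parameter
names; none of these identifiers come from ERICA.

    # Illustrative geometry for the zoom window. The window size and zoom
    # factor together fix how much of the screen is captured, as noted above.

    def captured_region(fixation, window_size, zoom):
        """Screen rectangle magnified into the zoom window around the fixation."""
        fx, fy = fixation
        w, h = window_size[0] / zoom, window_size[1] / zoom   # screen area shown
        return (fx - w / 2, fy - h / 2, w, h)

    def window_to_screen(point_in_window, region, zoom):
        """Map a fixation inside the zoom window back to the true screen point."""
        rx, ry, _, _ = region
        px, py = point_in_window
        return (rx + px / zoom, ry + py / zoom)

    # Example: a 400x400 zoom window at 4x magnification captures a 100x100
    # screen region centered on the (imprecise) fixation estimate, so a click
    # resolved inside the window lands with four times the effective accuracy.
    region = captured_region((812, 604), (400, 400), 4.0)
    print(window_to_screen((200, 200), region, 4.0))   # -> (812.0, 604.0)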
`
`24
`
`Supercell
`Exhibit 1005
`Page 6
`
`
`
Figure 4. The Click Menu. After fixating on a spot in the zoom
window, this menu appears to present the user with a series of
choices as to what action to perform.

The zooming feature allows any accuracy concerns with the
system to become moot. Even with an accuracy level of two to
three millimeters, the user cannot consistently fixate on certain
GUI control objects such as scroll bars and toolbar buttons when
operating at a high resolution. The zooming feature provides a
reliable means for activating a GUI control and accomplishing
various tasks in the GUI environment. With this feature,
individuals can effectively tap into the GUI using solely the eye.

The ERICA system employs a variety of smoothing algorithms to
remove jitter in the mouse cursor; however, due to accuracy
limitations in the system, the cursor may have a physical
displacement from where the subject is actually looking, as
discussed in section 2.1. This presents a positive feedback
problem; if the user looks at the displaced cursor, then the cursor
moves farther away from where the user originally intended a
mouse action to occur. New users of the ERICA system must
learn to ignore the displacement, and fortunately, the zooming
methodology eases the burden of learning to do this. Since the
magnification level and size of the zoom window can be changed,
a different amount of screen area can be captured and magnified
depending on the settings. Therefore, if the settings are made
correctly, the zoom window will magnify where the user is really
looking regardless of the mouse cursor having a slight physical
displacement. A sketch of one such smoothing filter appears below.
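The paper does not say which smoothing algorithms ERICA uses; as one
plausible example, a simple moving average over recent gaze samples
already suppresses much of the jitter. The window length here is an
assumed value.

    from collections import deque

    # A minimal jitter filter of the kind the text alludes to; this
    # moving-average form is an assumption, not ERICA's actual filter.
    class GazeSmoother:
        def __init__(self, window=8):
            self.samples = deque(maxlen=window)

        def update(self, gaze):
            # Average the most recent samples to obtain the cursor position.
            self.samples.append(gaze)
            n = len(self.samples)
            return (sum(x for x, _ in self.samples) / n,
                    sum(y for _, y in self.samples) / n)

    smoother = GazeSmoother()
    for sample in [(100, 100), (104, 98), (99, 103), (101, 100)]:
        cursor = smoother.update(sample)   # cursor with jitter averaged out
    print(cursor)   # -> (101.0, 100.25)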
Ignoring this displacement is more difficult with dragging
operations because frequently larger icons are moving across the
screen; however, the user can effectively perform typical drag-
and-drop operations or scrolling operations with this
methodology. The user must ignore the displacement that occurs
and rely on the zoom window to position the object correctly.
This technique does fail, however, if the user is performing a
drawing operation with his or her eye. For example, if the
drawing application draws a line wherever the mouse cursor
moves, then many haphazardly drawn lines will result as the eye
scans the screen during the drag operation.

The system is not limited to using the zooming feature and a
menu for implementing click actions. If the user knows they are
going to be only using certain actions, like left clicking, then the
menu may be bypassed and different dwell times can activate the
different clicking actions in the zoom window, following the
methods described previously in section 2.1.

This variety in the clicking mechanisms allows the eye-gaze
system to be optimized for each user - the user interfaces with
the GUI in the most efficient method that suits their particular
needs.
`3. EYE-TYPING
`Although ERICA now had an efficient means to perform mouse
`actions, the system still needed to provide a means for the user to
`robustly type with their eye. The previous ERICA keyboard was
`its own word processor. The typing the user performed with their
eye appeared within a text box inside the keyboard
application. This design lacked flexibility: if ERICA
wanted to support spell checking, for example, then ERICA
programmers would have to devise and implement more code.
`Furthermore, the keyboard worked by having the user look at a
`key and then an accept button. When the user fixated on a key,
the key enlarged itself. After the user looked at the accept button,
`the keyboard typed the key into the text box. Also, only a few of
`the letters in the alphabet would appear at a time on the
`keyboard due to the accuracy limitations in the earlier ERICA
`systems.
`The ERICA eye-typing application, called the Visual Keyboard,
needed to become more robust. Instead of providing its own word
processor, the keyboard needed to interact with other
`applications in Windows. This would allow users to benefit from
`what other programs had to offer. ERICA programmers would
`not have to rewrite preexisting software. Also, the accept button
`was unnecessary. The keyboard could remove the accept button
`and type words after the user fixated on them for a predefined
`time. Lastly, with the higher accuracy the system now possessed
`after its migration to Windows, the keyboard could show more
`keys at a single time.
`The new Visual Keyboard application grants the system user the
means to efficiently perform text entry functions with their eye
into any application running on the computer. Prolonged fixation
`on a keyboard key in the software causes that key's action to be
`performed. This activation time is customizable to suit different
users, but a highly skilled user can work effectively with a
dwell-time activation as short as 0.5 seconds.
`As the user effectively types with their eye, a dictionary at the
`bottom of the screen displays words that follow the user's typing;
`the dictionary narrows its word selection choices to match the
characters typed by the user. The words are ordered
`alphabetically and by frequency. The words used more often are
`placed at the top of the list and in a brighter color. The keyboard
`is also tightly integrated with the applications in which it works.
`The keyboard's dictionary updates not only in response to the
`user's typing but in response to their mouse actions as well. For
example, if the user clicks on another portion of the document,
then the keyboard will update the dictionary to reflect the word
at the cursor's new position. The dictionary is not limited to only
words either - entire phrases or any alphanumeric character
`combination may appear or be added to the dictionary file. Using
`the dictionary greatly increases the speed of text input. Figure 5
`shows a typical keyboard layout.
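The dictionary behavior described above amounts to prefix filtering
with a frequency-then-alphabetical ordering. The compact sketch below
illustrates the idea; the word counts and function names are invented
for illustration.

    # Illustrative prefix-matching dictionary: entries are narrowed to the
    # characters typed so far and ordered most-used first, ties broken
    # alphabetically, mirroring the behavior described in the text.
    def predictions(dictionary, prefix, limit=5):
        """dictionary maps entry -> use count; returns entries to display."""
        matches = [entry for entry in dictionary if entry.startswith(prefix)]
        matches.sort(key=lambda entry: (-dictionary[entry], entry))
        return matches[:limit]

    usage = {"the": 120, "they": 40, "their": 35, "theory": 3, "them": 25}
    print(predictions(usage, "the"))
    # -> ['the', 'they', 'their', 'them', 'theory']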
`
`
`25
`
`Supercell
`Exhibit 1005
`Page 7
`
`
`
Figure 5. The Keyboard Layout. The Visual Keyboard presents
this layout by default to the user for eye-gaze typing.

Able-bodied users may type with their hands faster than they can
with the eye-driven keyboard, but the application is a great boon
to disabled individuals who cannot type otherwise. This
application is not a dedicated word processor; it types into
whatever application the user is working in, whether it is a
spreadsheet, word processor, or e-mail program, and it can be
customized to suit the application in which the user works. This
software, when coupled with the mouse control already bestowed
by ERICA, grants the user complete control over the computer.

Figure 6 shows the collapsing red rectangle that marks the key
that the user is fixating upon. Prolonged fixation on the key will
have the rectangle continue its collapse. When the rectangle
finishes collapsing, that key's action is performed. The term
"clicking" will refer to the user fixating on the key for the
predetermined fixation time and causing that key's action to
occur.

[Figure: panels labeled "Non-highlighted key", "Highlighted key",
"Box in mid-collapse", and "Box at the end of collapse"]

Figure 6. Rectangle Collapse on the Key. A collapsing
rectangle appears as the user fixates on a key; when the
rectangle reaches the end of its collapse, the key's action is
performed.

There are three types of keys in the visual keyboard. One is
called a character key. This key represents a single key on the
physical keyboard. When this key is clicked, that key's character
is typed into the application the user is working in and the
dictionary updates its dictionary keys to reflect the new typing. A
second type of key is a word key, and the dictionary uses these
keys to show the words available for rapid typing by the user. If
the user clicks one of these keys, then that entire word or
sequence of words shown on the key is typed into the application
they are working in. The last type of key is a command key.
These keys perform various actions for the keyboard such as
launching other applications, acting as a taskbar to easily change
the active application, navigating through the dictionary, loading
other keyboard layouts, and pausing the system to allow the user
to rest.
`As the user shifts from application to application, the keyboard's
`layout and dictionary change to suit the particular application.
`For example, when using a word processing application, the
`layout should have all available character keys and a dictionary
`consistent with the English (or some other) language. If the user
`shifted over to a compiler, for example, then the dictionary
`would change to a listing containing common syntax and phrases
`used in the computer language. The layout may also change to
`maximize the user's efficiency in using the compiler.
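This per-application adaptation can be pictured as a lookup from the
active program to a layout-and-dictionary profile. The sketch below is
one plausible shape for such a table; the program and file names are
hypothetical.

    # Illustrative per-application configuration: when focus shifts, the
    # keyboard looks up a layout and dictionary for the active program,
    # falling back to a default profile.
    PROFILES = {
        "winword.exe": {"layout": "full_text.kbd", "dictionary": "english.dic"},
        "compiler.exe": {"layout": "code.kbd", "dictionary": "syntax.dic"},
    }
    DEFAULT = {"layout": "full_text.kbd", "dictionary": "english.dic"}

    def profile_for(active_application):
        return PROFILES.get(active_application, DEFAULT)

    print(profile_for("compiler.exe")["dictionary"])   # -> syntax.dic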
`The user may use their eyes to reconfigure the keyboard layout.
`A sequence of dwell time controlled menus allows the user to
`manipulate the keys on the layout. Keys may be switched, added,
`deleted, or their timing for activation may be altered. These
`changes may be stored for only the application the user is
`currently working in; thus the layout and its attributes will shift
`as the user changes applications.
`The keyboard also performs various tasks as the user types. If the
user repeatedly types a word, then the keyboard will
`automatically add that word to the dictionary associated with the
`application. The keyboard also checks the user's typing and
`removes any characters that disobey some common rules of
`grammar. For example, if the keyboard finds that the user has
`placed punctuation marks in the middle of a word, then the
`marks will be automatically removed.
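A rule of that kind is easy to state precisely; the single regular
expression below captures the mid-word punctuation case. The text says
the keyboard applies several such rules, which it does not enumerate,
so this one rule is an illustrative assumption.

    import re

    # Strip punctuation characters flanked by letters on both sides,
    # mirroring the mid-word cleanup rule described above.
    def strip_midword_punctuation(text):
        return re.sub(r"(?<=[A-Za-z])[.,;:!?]+(?=[A-Za-z])", "", text)

    print(strip_midword_punctuation("hel.lo wor,ld done."))
    # -> "hello world done."  (trailing sentence punctuation is preserved)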
`Lastly, to provide disabled individuals with a means to speak, the
keyboard has a third-party text-to-speech software package
integrated into it. DecTalk, made by Digital Equipment
Corporation, provides nine different voices, ranging from a child
to an older man and from a young woman to an older woman. The
`keyboard speaks whatever the current line of text is that the user
`has typed into the application they are working in.
The keyboard proved a great success in testing with disabled
users, but one notable addition was a phrases board. Many
disabled people have certain actions they need performed on a
regular basis, such as having their suction adjusted or the TV
volume changed. Instead of having these long phrases in the
dictionary, a layout was created that presents many different
user-customizable phrases. Fixating on a key bearing a phrase
types the phrase out and has the system speak it automatically. This lets
`the user quickly have their needs cared for.
`With the new keyboard application, users now have effective
`control over the computer. Their eyes can drag, click, double
`click, and type, giving them reliable control over the Windows
`environment.
`4. CONCLUSION
Deployment of the system to users with disabilities made the
`benefits of the device readily apparent. Observing disabled
`
`26
`
`Supercell
`Exhibit 1005
`Page 8
`
`
`
`individuals working with the device showed what a tremendous
boon it could be to people stripped of other means to
`communicate effectively. The first disabled person to use this
`version of the ERICA system was a man with Amyotrophic
`lateral sclerosis (ALS), otherwise known as Lou Gehrig's
disease, in Washington, DC. He wrote, "ERICA is my
lifeline... the quality of my life is woven by ERICA." The
feedback he gave on the performance of the system was
`instrumental to its success. The entire concept of zooming arose
`
`from observing and thinking about the difficulties that he
`encountered as he operated ERICA.
`The system has benefited other users with disabilities during the
`course of this work as well. Another ALS patient in Waynesboro
`uses ERICA to communicate with his health care givers and to
`write a philosophy book. Medicaid also recently approved
`funding for the device for a child in Roanoke, deeming ERICA a
`medical necessity in this child's case. The child uses ERICA to
`select things to say in the Words+ Talking Screens application.
`
`27
`
`Supercell
`Exhibit 1005
`Page 9
`
`