(12) Patent Application Publication      (10) Pub. No.: US 2016/0093105 A1
     Rimon et al.                        (43) Pub. Date: Mar. 31, 2016

(54) DISPLAY OF TEXT INFORMATION ON A HEAD-MOUNTED DISPLAY

(71) Applicant: Sony Computer Entertainment Inc., Tokyo (JP)

(72) Inventors: Noam Rimon, San Mateo, CA (US); Jeffrey Roger Stafford, San Mateo, CA (US)

(21) Appl. No.: 14/503,196

(22) Filed: Sep. 30, 2014

Publication Classification

(51) Int. Cl.
     G06T 19/00   (2006.01)
     G06F 3/01    (2006.01)
     G06F 3/0484  (2006.01)
     G06F 3/041   (2006.01)
     G06F 3/0488  (2006.01)
     G02B 27/01   (2006.01)
     G06T 11/60   (2006.01)

(52) U.S. Cl.
     CPC .......... G06T 19/006 (2013.01); G02B 27/017 (2013.01);
     G06F 3/017 (2013.01); G06F 3/012 (2013.01); G06F 3/013
     (2013.01); G06T 11/60 (2013.01); G06F 3/041 (2013.01);
     G06F 3/0488 (2013.01); G06F 3/04845 (2013.01); G02B
     2027/0178 (2013.01); G06F 2203/04806 (2013.01)

`(57)
`
`ABSTRACT
`
`A method for presenting text information on a head-mounted
`display is provided, comprising: rendering a view of a virtual
`environment to the head-mounted display; tracking an orien
`tation of the head-mounted display; tracking a gaze of a user
`of the head-mounted display; processing the gaze of the user
`and the orientation of the head-mounted display, to identify a
`gaze target in the virtual environment towards which the gaZe
`of the user is directed; receiving text information for render
`ing on the head-mounted display; presenting the text infor
`mation in the virtual environment in a vicinity of the gaZe
`target.
`
Supercell
Exhibit 1006
Page 1

[Drawing sheet 1 of 14]

[Drawing sheet 2 of 14]

[Drawing sheet 3 of 14]

[Drawing sheet 4 of 14]

[Drawing sheet 5 of 14]

[Drawing sheet 6 of 14]

[Drawing sheet 7 of 14; legible label: "PERIPHERAL"]

[Drawing sheet 8 of 14: FIG. 8A; legible labels: "AUDIO ALERT", "HAPTIC ALERT", "COMPUTER", "New Message from John"]

[Drawing sheet 9 of 14]

[Drawing sheet 10 of 14: FIG. 9, flow diagram; rotated labels include "RECEIVE TEXT INFORMATION"]

[Drawing sheet 11 of 14: FIG. 10; rotated labels include "SERVER", "NETWORK", "CONTROLLER", "COMPUTING DEVICE"]

[Drawing sheet 12 of 14]

[Drawing sheet 13 of 14]

[Drawing sheet 14 of 14: FIG. 13; legible labels: 1400, 1430 "VIDEO SERVER SYSTEM", 1445 "VIDEO SOURCE", "I/O DEVICE", "PROCESSOR", "STORAGE", 1450, 1455, "CLIENT QUALIFIER", "GAME SERVER", "CLIENT", "CLIENT"]

DISPLAY OF TEXT INFORMATION ON A HEAD-MOUNTED DISPLAY

BACKGROUND

1. Field of the Invention

[0002] The present invention relates to methods and systems for display of text information on a head-mounted display.

[0003] 2. Description of the Related Art
[0004] The video game industry has seen many changes over the years. As computing power has expanded, developers of video games have likewise created game software that takes advantage of these increases in computing power. To this end, video game developers have been coding games that incorporate sophisticated operations and mathematics to produce a very realistic game experience.
[0005] Example gaming platforms may be the Sony PlayStation®, Sony PlayStation2® (PS2), and Sony PlayStation3® (PS3), each of which is sold in the form of a game console. As is well known, the game console is designed to connect to a monitor (usually a television) and enable user interaction through handheld controllers. The game console is designed with specialized processing hardware, including a CPU, a graphics synthesizer for processing intensive graphics operations, a vector unit for performing geometry transformations, and other glue hardware, firmware, and software. The game console is further designed with an optical disc tray for receiving game compact discs for local play through the game console. Online gaming is also possible, where a user can interactively play against or with other users over the Internet. As game complexity continues to intrigue players, game and hardware manufacturers have continued to innovate to enable additional interactivity and computer programs.
[0006] A growing trend in the computer gaming industry is to develop games that increase the interaction between the user and the gaming system. One way of accomplishing a richer interactive experience is to use wireless game controllers whose movement is tracked by the gaming system in order to track the player's movements and use these movements as inputs for the game. Generally speaking, gesture input refers to having an electronic device such as a computing system, video game console, smart appliance, etc., react to some gesture made by the player and captured by the electronic device.
[0007] Another way of accomplishing a more immersive interactive experience is to use a head-mounted display. A head-mounted display is worn by the user and can be configured to present various graphics, such as a view of a virtual space. The graphics presented on a head-mounted display can cover a large portion or even all of a user's field of view. Hence, a head-mounted display can provide a visually immersive experience to the user.
[0008] Another growing trend in the industry involves the development of cloud-based gaming systems. Such systems may include a remote processing server that executes a game application, and communicates with a local thin client that can be configured to receive input from users and render video on a display.

[0009] It is in this context that embodiments of the invention arise.

SUMMARY

[0010] Embodiments of the present invention provide methods and systems for display of text information on a head-mounted display. It should be appreciated that the present invention can be implemented in numerous ways, such as a process, an apparatus, a system, a device or a method on a computer readable medium. Several inventive embodiments of the present invention are described below.
[0011] In one embodiment, a method for presenting text information on a head-mounted display is provided, comprising: rendering a view of a virtual environment to the head-mounted display; tracking an orientation of the head-mounted display; tracking a gaze of a user of the head-mounted display; processing the gaze of the user and the orientation of the head-mounted display, to identify a gaze target in the virtual environment towards which the gaze of the user is directed; receiving text information for rendering on the head-mounted display; presenting the text information in the virtual environment in a vicinity of the gaze target.
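The claimed steps reduce to combining two tracked orientations to pick a target, then anchoring text near it. The following Python sketch illustrates the gaze-target step under a deliberately simplified yaw-only model; the helper names and object table are illustrative assumptions, not part of the specification:

```python
def find_gaze_target(hmd_yaw_deg, eye_yaw_deg, objects):
    """Combine the HMD's tracked yaw with the eye's yaw (relative to the
    HMD) and return the virtual object nearest the combined gaze direction.
    `objects` maps object names to their yaw angle as seen from the viewpoint."""
    gaze_yaw = hmd_yaw_deg + eye_yaw_deg  # combined gaze direction
    best, best_err = None, float("inf")
    for name, obj_yaw in objects.items():
        # shortest angular distance, wrapped into [-180, 180)
        err = abs((obj_yaw - gaze_yaw + 180.0) % 360.0 - 180.0)
        if err < best_err:
            best, best_err = name, err
    return best

def present_text_near(target, text):
    """Anchor received text information in the vicinity of the gaze target."""
    return {"anchor": target, "text": text}

# Head turned 30 degrees right, eyes a further 10 degrees right -> gaze at 40 degrees.
objects = {"signpost": 42.0, "tree": -15.0, "door": 120.0}
target = find_gaze_target(30.0, 10.0, objects)
label = present_text_near(target, "New Message from John")
```

A production system would work with full 3D rotations and ray casting rather than a single yaw angle, but the selection logic is the same.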
[0012] In one embodiment, the gaze target defines a view depth in the virtual environment, the view depth defined relative to a virtual viewpoint in the virtual environment that is defined for the head-mounted display and that defines a perspective from which the view of the virtual environment is rendered; wherein the text information is presented in the virtual environment at approximately the view depth.
[0013] In one embodiment, the text information is presented in the virtual environment at a lateral location relative to the gaze target.
[0014] In one embodiment, the text information is presented on an object in the virtual environment that is located at approximately the view depth.

[0015] In one embodiment, the text information is presented at substantially a location defined by the gaze target.

[0016] In one embodiment, the gaze target identifies an object in the virtual environment; and, the text information is presented on the object in the virtual environment.

[0017] In one embodiment, determining the orientation of the head-mounted display includes capturing images of the head-mounted display and analyzing the captured images of the head-mounted display.

[0018] In one embodiment, determining the orientation of the head-mounted display includes processing data received from at least one inertial sensor included in the head-mounted display.

[0019] In one embodiment, tracking the gaze of the user includes tracking an orientation of an eye of the user.

[0020] In one embodiment, the text information is defined by one or more of an instant message, an e-mail, a chat message, a social network feed.
[0021] In one embodiment, a method for displaying text on a head-mounted display is provided, comprising: rendering a view of a virtual environment on a head-mounted display; receiving text information for display to a user of the head-mounted display; tracking a location and orientation of a controller device in a vicinity of the head-mounted display; in response to the receiving text information, rendering, in the view of the virtual environment, a virtual display device, the virtual display device configured to display the text information, a location and orientation of the virtual display device in the virtual environment being controlled based on the tracked location and orientation of the controller device.
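The coupling between the tracked controller and the virtual display device amounts to re-deriving the virtual display's pose from the controller's pose on each tracking update. A minimal sketch, in which the fixed offset (the display floating slightly above the controller) is an illustrative assumption:

```python
def virtual_display_pose(controller_pos, controller_orient, offset=(0.0, 0.5, 0.0)):
    """Derive the virtual display device's pose from the tracked controller
    pose: the display sits at a fixed offset from the controller's location
    and mirrors the controller's orientation (pitch, yaw, roll)."""
    position = tuple(c + o for c, o in zip(controller_pos, offset))
    return {"position": position, "orientation": controller_orient}

# On each tracking update, re-pose the virtual display so that tilting or
# moving the real controller tilts or moves the in-world display.
pose = virtual_display_pose((1.0, 1.0, 2.0), (0.0, 45.0, 0.0))
```

Because the mapping is recomputed every frame, the virtual display appears rigidly attached to the physical controller.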
[0022] In one embodiment, the method further comprises: receiving input from a touch-sensitive surface of the controller device; processing the input to define an interaction with the text information as it is displayed on the virtual display device.
[0023] In one embodiment, the interaction with the text information is selected from scrolling the text information, moving the text information, adjusting a level of zoom, and selecting a portion of the text information.
[0024] In another embodiment, a method for displaying text on a head-mounted display is provided, comprising: rendering a view of a virtual environment to a head-mounted display; receiving text information for display on the head-mounted display; in response to the receiving text information, rendering an alert to the head-mounted display; detecting a predefined response to the alert; in response to detecting the predefined response to the alert, rendering the text information to the head-mounted display.
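The alert-then-reveal flow above is effectively a small state machine: receipt of text renders an alert, and the text itself is rendered only after the predefined response. A sketch with hypothetical class and response names (the specification does not prescribe this structure):

```python
class TextAlertFlow:
    """State machine for the alert-then-reveal flow: idle -> alert_shown
    (on text receipt) -> text_rendered (on the predefined response)."""

    def __init__(self):
        self.stage = "idle"
        self.pending = None

    def receive_text(self, text):
        # Receiving text triggers only an alert, not the text itself.
        self.pending = text
        self.stage = "alert_shown"

    def on_response(self, response):
        # The full text is revealed only for the predefined response,
        # e.g. the user's gaze landing on the visual indicator.
        if self.stage == "alert_shown" and response == "gaze_at_indicator":
            self.stage = "text_rendered"
            return self.pending
        return None

flow = TextAlertFlow()
flow.receive_text("New Message from John")
shown = flow.on_response("gaze_at_indicator")
```

Other predefined responses, such as moving the controller towards the HMD, would simply be additional accepted transitions out of the `alert_shown` state.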
[0025] In one embodiment, the alert is defined by display of a visual indicator on the head-mounted display; the predefined response is defined by detection of a gaze direction of a user of the head-mounted display that is towards the visual indicator.
[0026] In one embodiment, the predefined response is defined by detection of a movement of a controller device towards the head-mounted display; wherein rendering the text information to the head-mounted display includes rendering, in the view of the virtual environment, a virtual display device that is configured to display the text information.
[0027] In one embodiment, a location and orientation of the virtual display device in the virtual environment are defined by a location and orientation of the controller device.
[0028] In one embodiment, in response to gesture input detected from a touch-sensitive surface of the controller device, an interaction with the rendered text information on the virtual display device is defined.
[0029] Other aspects of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0030] The invention may best be understood by reference to the following description taken in conjunction with the accompanying drawings in which:

[0031] FIG. 1 illustrates a system for interactive gameplay of a video game, in accordance with an embodiment of the invention.

[0032] FIG. 2 illustrates a head-mounted display (HMD), in accordance with an embodiment of the invention.

[0033] FIG. 3 conceptually illustrates the function of a HMD in conjunction with an executing video game, in accordance with an embodiment of the invention.

[0034] FIG. 4 illustrates a view of the virtual environment which can be rendered on an HMD device, in accordance with an embodiment of the invention.

[0035] FIG. 5 conceptually illustrates a user interacting in a virtual environment, in accordance with an embodiment of the invention.

[0036] FIG. 6 conceptually illustrates a user 600 viewing a virtual environment 601, in accordance with an embodiment of the invention.

[0037] FIGS. 7A and 7B illustrate the use of a visual alert to trigger display of text in a virtual environment, in accordance with an embodiment of the invention.

[0038] FIG. 8A illustrates a user 800 interacting with a virtual environment via an HMD device 802 and operating a controller 804, in accordance with an embodiment of the invention.

[0039] FIG. 8B illustrates a view of a virtual environment, in which a controller device is replaced with or shown as a virtual device having a display screen for displaying the text information.

[0040] FIG. 8C illustrates a top view of a controller device, in accordance with an embodiment of the invention.

[0041] FIG. 8D illustrates transference of touch/gesture input received at a real-world controller device to a virtual display device in a virtual environment, in accordance with an embodiment of the invention.

[0042] FIG. 9 is a flow diagram conceptually illustrating a process for presenting text information to a user in a virtual environment.

[0043] FIG. 10 illustrates a system for rendering text information on a head-mounted display to a user, in accordance with an embodiment of the invention.

[0044] FIG. 11A illustrates a configuration for operating a head-mounted display device, in accordance with an embodiment of the invention.

[0045] FIG. 11B illustrates a configuration for operating a head-mounted display device, in accordance with an embodiment of the invention.

[0046] FIG. 12 illustrates components of a head-mounted display, in accordance with an embodiment of the invention.

[0047] FIG. 13 is a block diagram of a Game System, according to various embodiments of the invention.

DETAILED DESCRIPTION

[0048] The following embodiments describe methods and apparatus for display of text information on a head-mounted display.

[0049] It will be obvious, however, to one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.
[0050] FIG. 1 illustrates a system for interactive gameplay of a video game, in accordance with an embodiment of the invention. A user 100 is shown wearing a head-mounted display (HMD) 102. The HMD 102 is worn in a manner similar to glasses, goggles, or a helmet, and is configured to display a video game or other content to the user 100. The HMD 102 provides a very immersive experience to the user by virtue of its provision of display mechanisms in close proximity to the user's eyes. Thus, the HMD 102 can provide display regions to each of the user's eyes which occupy large portions or even the entirety of the field of view of the user.

[0051] In one embodiment, the HMD 102 can be connected to a computer 106. The connection to computer 106 can be wired or wireless. The computer 106 can be any general or special purpose computer known in the art, including but not limited to, a gaming console, personal computer, laptop, tablet computer, mobile device, cellular phone, tablet, thin client, set-top box, media streaming device, etc. In one embodiment, the computer 106 can be configured to execute a video game, and output the video and audio from the video game for rendering by the HMD 102.
[0052] The user 100 may operate a controller 104 to provide input for the video game. Additionally, a camera 108 can be configured to capture images of the interactive environment in which the user 100 is located. These captured images can be analyzed to determine the location and movements of the user 100, the HMD 102, and the controller 104. In one embodiment, the controller 104 includes a light which can be tracked to determine its location and orientation. Additionally, as described in further detail below, the HMD 102 may include one or more lights which can be tracked to determine the location and orientation of the HMD 102. The camera 108 can include one or more microphones to capture sound from the interactive environment. Sound captured by a microphone array may be processed to identify the location of a sound source. Sound from an identified location can be selectively utilized or processed to the exclusion of other sounds not from the identified location. Furthermore, the camera 108 can be defined to include multiple image capture devices (e.g. a stereoscopic pair of cameras), an IR camera, a depth camera, and combinations thereof.
[0053] In another embodiment, the computer 106 functions as a thin client in communication over a network with a cloud gaming provider 112. The cloud gaming provider 112 maintains and executes the video game being played by the user 100. The computer 106 transmits inputs from the HMD 102, the controller 104 and the camera 108, to the cloud gaming provider, which processes the inputs to affect the game state of the executing video game. The output from the executing video game, such as video data, audio data, and haptic feedback data, is transmitted to the computer 106. The computer 106 may further process the data before transmission or may directly transmit the data to the relevant devices. For example, video and audio streams are provided to the HMD 102, whereas a vibration feedback command is provided to the controller 104.

[0054] In one embodiment, the HMD 102, controller 104, and camera 108, may themselves be networked devices that connect to the network 110 to communicate with the cloud gaming provider 112. For example, the computer 106 may be a local network device, such as a router, that does not otherwise perform video game processing, but facilitates passage of network traffic. The connections to the network by the HMD 102, controller 104, and camera 108 may be wired or wireless.
[0055] FIG. 2 illustrates a head-mounted display (HMD), in accordance with an embodiment of the invention. As shown, the HMD 102 includes a plurality of lights 200A-H. Each of these lights may be configured to have specific shapes, and can be configured to have the same or different colors. The lights 200A, 200B, 200C, and 200D are arranged on the front surface of the HMD 102. The lights 200E and 200F are arranged on a side surface of the HMD 102. And the lights 200G and 200H are arranged at corners of the HMD 102, so as to span the front surface and a side surface of the HMD 102. It will be appreciated that the lights can be identified in captured images of an interactive environment in which a user uses the HMD 102. Based on identification and tracking of the lights, the location and orientation of the HMD 102 in the interactive environment can be determined. It will further be appreciated that some of the lights may or may not be visible depending upon the particular orientation of the HMD 102 relative to an image capture device. Also, different portions of lights (e.g. lights 200G and 200H) may be exposed for image capture depending upon the orientation of the HMD 102 relative to the image capture device.
[0056] Throughout the present disclosure, the terms “location,” “orientation,” and “position” may be utilized. Unless otherwise indicated or apparent from the context of the specific description in which the term is applied, these terms are defined as follows. The term “location” identifies a point or region of a spatial system, for example, at which an object is situated (i.e. where the object is “located”). In some implementations, a location can be defined by coordinates in a spatial coordinate system (e.g. (x, y, z) coordinates in a three-dimensional coordinate system). The term “orientation” identifies the directional pose of an object in a spatial system, which is independent of the object's location. “Orientation” can be defined by rotational aspects of an object such as pitch, yaw, and roll within a spatial system. The term “position” can identify location and/or orientation. However, it is noted that the term “position” can also be applied in other ways which may or may not be related to location/orientation (e.g. to identify a relative ranking, a configuration, etc.), and that such will be apparent from the context of the description in which the term is applied. Furthermore, it will be appreciated that the terms “location,” “orientation,” and “position” may be applied to a real space, a virtual space, and/or an augmented reality space.
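The definitions above map naturally onto simple data types: a location as coordinates, an orientation as pitch/yaw/roll, and a position as the pair. The following sketch uses hypothetical type names to make the distinction concrete:

```python
from dataclasses import dataclass

@dataclass
class Location:
    """A point in a spatial coordinate system, e.g. (x, y, z)."""
    x: float
    y: float
    z: float

@dataclass
class Orientation:
    """Directional pose of an object, independent of its location."""
    pitch: float
    yaw: float
    roll: float

@dataclass
class Position:
    """In the sense used here, a position can bundle both location and
    orientation; in other contexts 'position' may mean only one of them."""
    location: Location
    orientation: Orientation

# The same types apply equally to a real, virtual, or augmented-reality space.
hmd_position = Position(Location(0.0, 1.6, 0.0), Orientation(0.0, 90.0, 0.0))
```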
[0057] In one embodiment, the lights can be configured to indicate a current status of the HMD to others in the vicinity. For example, some or all of the lights may be configured to have a certain color arrangement, intensity arrangement, be configured to blink, have a certain on/off configuration, or other arrangement indicating a current status of the HMD 102. By way of example, the lights can be configured to display different configurations during active gameplay of a video game (generally gameplay occurring during an active timeline or within a scene of the game) versus other non-active gameplay aspects of a video game, such as navigating menu interfaces or configuring game settings (during which the game timeline or scene may be inactive or paused). The lights might also be configured to indicate relative intensity levels of gameplay. For example, the intensity of lights, or a rate of blinking, may increase when the intensity of gameplay increases. In this manner, a person external to the user may view the lights on the HMD 102 and understand that the user is actively engaged in intense gameplay, and may not wish to be disturbed at that moment.
[0058] The HMD 102 may additionally include one or more microphones. In the illustrated embodiment, the HMD 102 includes microphones 204A and 204B defined on the front surface of the HMD 102, and microphone 204C defined on a side surface of the HMD 102. By utilizing an array of microphones, sound from each of the microphones can be processed to determine the location of the sound's source. This information can be utilized in various ways, including exclusion of unwanted sound sources, association of a sound source with a visual identification, etc.
[0059] The HMD 102 may also include one or more image capture devices. In the illustrated embodiment, the HMD 102 is shown to include image capture devices 202A and 202B. By utilizing a stereoscopic pair of image capture devices, three-dimensional (3D) images and video of the environment can be captured from the perspective of the HMD 102. Such video can be presented to the user to provide the user with a “video see-through” ability while wearing the HMD 102. That is, though the user cannot see through the HMD 102 in a strict sense, the video captured by the image capture devices 202A and 202B can nonetheless provide a functional equivalent of being able to see the environment external to the HMD 102 as if looking through the HMD 102. Such video can be augmented with virtual elements to provide an augmented reality experience, or may be combined or blended with virtual elements in other ways. Though in the illustrated embodiment, two cameras are shown on the front surface of the HMD 102, it will be appreciated that there may be any number of externally facing cameras installed on the HMD 102, oriented in any direction. For example, in another embodiment, there may be cameras mounted on the sides of the HMD 102 to provide additional panoramic image capture of the environment.
[0060] FIG. 3 conceptually illustrates the function of the HMD 102 in conjunction with an executing video game, in accordance with an embodiment of the invention. The executing video game is defined by a game engine 320 which receives inputs to update a game state of the video game. The game state of the video game can be defined, at least in part, by values of various parameters of the video game which define various aspects of the current gameplay, such as the presence and location of objects, the conditions of a virtual environment, the triggering of events, user profiles, view perspectives, etc.
[0061] In the illustrated embodiment, the game engine receives, by way of example, controller input 314, audio input 316 and motion input 318. The controller input 314 may be defined from the operation of a gaming controller separate from the HMD 102, such as controller 104. By way of example, controller input 314 may include directional inputs, button presses, trigger activation, movements, or other kinds of inputs processed from the operation of a gaming controller. The audio input 316 can be processed from a microphone 302 of the HMD 102, or from a microphone included in the image capture device 108. The motion input 318 can be processed from a motion sensor 300 included in the HMD 102, or from image capture device 108 as it captures images of the HMD 102. The game engine 320 receives inputs which are processed according to the configuration of the game engine to update the game state of the video game. The game engine 320 outputs game state data to various rendering modules which process the game state data to define content which will be presented to the user.
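The data flow just described, inputs feeding a game-state update, with rendering modules reading the resulting state, can be sketched as follows. The class and field names are hypothetical; the patent describes the architecture conceptually, not this implementation:

```python
class GameEngine:
    """Minimal sketch of the engine's role: consume controller, audio, and
    motion inputs; update the game state; expose the state for rendering."""

    def __init__(self):
        self.state = {"player_pos": 0.0, "events": []}

    def update(self, controller_input=0.0, audio_input=None, motion_input=None):
        # Inputs are processed according to the engine's configuration to
        # update the game state (here, trivially: movement plus logged events).
        self.state["player_pos"] += controller_input
        if audio_input is not None:
            self.state["events"].append(("audio", audio_input))
        if motion_input is not None:
            self.state["events"].append(("motion", motion_input))
        return self.state

def render_video(state):
    """A rendering module reads game-state data to define output content."""
    return f"frame@{state['player_pos']:.1f}"

engine = GameEngine()
engine.update(controller_input=1.5, motion_input="head_turn")
frame = render_video(engine.state)
```

Audio rendering and tactile feedback modules would consume the same state in parallel, each producing its own output stream.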
[0062] In the illustrated embodiment, a video rendering module 322 is defined to render a video stream for presentation on the HMD 102. The video stream may be presented by a display/projector mechanism 310, and viewed through optics 308 by the eye 306 of the user. An audio rendering module 324 is configured to render an audio stream for listening by the user. In one embodiment, the audio stream is output through a speaker 304 associated with the HMD 102. It should be appreciated that speaker 304 may take the form of an open air speaker, headphones, or any other kind of speaker capable of presenting audio.
[0063] In one embodiment, a gaze tracking camera 312 is included in the HMD 102 to enable tracking of the gaze of the user. The gaze tracking camera captures images of the user's eyes, which are analyzed to determine the gaze direction of the user. In one embodiment, information about the gaze direction of the user can be utilized to affect the video rendering. For example, if a user's eyes are determined to be looking in a specific direction, then the video rendering for that direction can be prioritized or emphasized, such as by providing greater detail or faster updates in the region where the user is looking. It should be appreciated that the gaze direction of the user can be defined relative to the head-mounted display, relative to a real environment in which the user is situated, and/or relative to a virtual environment that is being rendered on the head-mounted display.
[0064] Broadly speaking, analysis of images captured by the gaze tracking camera 312, when considered alone, provides for a gaze direction of the user relative to the HMD 102. However, when considered in combination with the tracked location and orientation of the HMD 102, a real-world gaze direction of the user can be determined, as the location and orientation of the HMD 102 is synonymous with the location and orientation of the user's head. That is, the real-world gaze direction of the user can be determined from tracking the positional movements of the user's eyes and tracking the location and orientation of the HMD 102. When a view of a virtual environment is rendered on the HMD 102, the real-world gaze direction of the user can be applied to determine a virtual-world gaze direction of the user in the virtual environment.
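The composition described here, HMD-relative eye gaze plus tracked HMD orientation yielding a world gaze direction, can be sketched as below. The additive yaw/pitch form is a simplifying assumption; a full implementation would compose rotations (e.g. as quaternions):

```python
import math

def world_gaze(hmd_yaw, hmd_pitch, eye_yaw, eye_pitch):
    """Eye tracking alone yields gaze relative to the HMD; adding the HMD's
    tracked orientation produces the real-world gaze direction. Angles are
    in degrees; this additive form ignores roll and large-angle coupling."""
    return hmd_yaw + eye_yaw, hmd_pitch + eye_pitch

def gaze_vector(yaw_deg, pitch_deg):
    """Convert a yaw/pitch gaze direction into a unit direction vector,
    which can then be cast into the virtual environment to find what the
    user is looking at."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

# Head turned 30 degrees right while the eyes look a further 10 degrees right:
yaw, pitch = world_gaze(30.0, 0.0, 10.0, 5.0)
direction = gaze_vector(yaw, pitch)
```

The same world gaze direction can then be re-expressed in virtual-environment coordinates to obtain the virtual-world gaze direction.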
[0065] Additionally, a tactile feedback module 326 is configured to provide signals to tactile feedback hardware included in either the HMD 102 or another device operated by the user, such as a controller 104. The tactile feedback may take the form of various kinds of tactile sensations, such as vibration feedback, temperature feedback, pressure feedback, etc.
[0066] As has been noted, the HMD device described herein is capable of providing a user with a highly immersive experience, enveloping a large proportion or even an entirety of a user's field of vision. Because of this immersive aspect of the HMD experience, it can be problematic to find ways to display text information to a user in a way that does not seem awkward or unnatural. Thus, in accordance with embodiments of the invention described herein, methods, apparatus, and systems are provided for the rendering of text on an HMD device.
[0067] It should be appreciated that the specific implementations described herein may be applied for the rendering of text from any source of text information. Merely for purposes of illustration, some representative examples of sources of text information and/or types of text information include the following: gaming related information such as in-game updates, score information, resource levels, energy levels, instructions, hints, in-game messages, player-to-player messaging/chat, captions, dialogue, player achievements, player status, etc.; gaming and/or non-gaming social network activity such as posts, messages, friend requests, addition to a social graph, recent activity, popular activity, birthday reminders, social network app notifications, etc.; mobile device (e.g. cellular phone, tablet, smartwatch, etc.) related information such as incoming calls, missed calls, alerts, notifications, text/SMS messages, chat messages, app notifications, etc.; and other types of information such as calendar/schedule reminders, e-mail messages, receipt of any of the foregoing, combinations of any of the foregoing, etc.
[0068] FIG. 4 illustrates a view of the virtual environment which can be rendered on an HMD device, in accordance with an embodiment of the invention. In accordance with some embodiments, text information can be displayed on any objects or surfaces that exist within a virtual environment. The display of text information in this manner can provide for rendering of text in a manner that is integrated into the contex