(12) United States Patent
Wan et al.

(10) Patent No.: US 10,114,465 B2
(45) Date of Patent: Oct. 30, 2018

(54) VIRTUAL REALITY HEAD-MOUNTED DEVICES HAVING REDUCED NUMBERS OF CAMERAS, AND METHODS OF OPERATING THE SAME

(71) Applicant: GOOGLE INC., Mountain View, CA (US)

(72) Inventors: Chung Chun Wan, San Jose, CA (US); Choon Ping Chng, Los Altos, CA (US)

(73) Assignee: GOOGLE LLC, Mountain View, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 227 days.

(21) Appl. No.: 14/996,858

(22) Filed: Jan. 15, 2016

(65) Prior Publication Data
    US 2017/0205886 A1    Jul. 20, 2017

(51) Int. Cl.
    G06F 3/01    (2006.01)
    G06T 19/00   (2011.01)
    G02B 27/01   (2006.01)

(52) U.S. Cl.
    CPC ......... G06F 3/017 (2013.01); G02B 27/0172 (2013.01); G06T 19/006 (2013.01); G02B 2027/014 (2013.01); G02B 2027/0112 (2013.01); G02B 2027/0138 (2013.01)

(58) Field of Classification Search
    CPC ......... G06F 3/017; G02B 27/0172; G02B 2027/0138; G02B 2027/014; G02B 2027/0112; G06T 19/006
    See application file for complete search history.

(56) References Cited

    U.S. PATENT DOCUMENTS

    5,889,505 A     3/1999   Toyama et al.
    6,630,915 B1   10/2003   Flood et al.
    7,274,393 B2*   9/2007   Acharya .......... H04N 5/33, 250/226
    8,139,142 B2*   3/2012   Bamji ............ G01S 17/89, 348/335
    (Continued)

    OTHER PUBLICATIONS

International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2016/068045, dated Mar. 20, 2017, 11 pages.
(Continued)

Primary Examiner — Antonio Xavier
(74) Attorney, Agent, or Firm — Brake Hughes Bellermann LLP

(57) ABSTRACT

Example virtual-reality head-mounted devices having reduced numbers of cameras, and methods of operating the same are disclosed herein. A disclosed example method includes providing a virtual-reality (VR) head-mounted display (V-HMD) having an imaging sensor, the imaging sensor including color-sensing pixels, and infrared (IR) sensing pixels amongst the color-sensing pixels; capturing, using the imaging sensor, an image having a color portion and an IR portion; forming an IR image from at least some of the IR portion from the image; performing a first tracking based on the IR image; forming a color image by replacing the at least some of the removed IR portion with color data determined from the color portion of the image and the location of the removed IR-sensing pixels in the image; and performing a second tracking based on the color image.

20 Claims, 7 Drawing Sheets
`
`6B
`
`622
`
`FINGER /
`HAND
`TRACKING ?
`NO
`
`YES
`
`600
`
`- - - 624
`
`FORM DEPTH DATA FROM THE FIRST AND SECOND IR IMAGES AND THE
`FIRST AND SECOND COLOR IMAGES FORMED FROM CORRESPONDING
`IMAGES CAPTURED BY THE FIRST AND SECOND SENSORS
`
`628
`
`PERFORM FINGERHAND TRACKING BASED THE DEPTH DATA
`626
`
`YES
`
`6DOF
`TRACKING ?
`7630
`NO
`DERIVE 6DOF TRACKING DATA FROM THE FIRST AND SECOND COLOR
`MAGES FORMED FROM IMAGES CAPTURED BY THE FIRST AND SECOND
`SENSORS
`
`END
`
`Meta Platforms, Inc.
`Exhibit 1040
`Page 001
`
`

`

`
(56) References Cited

    U.S. PATENT DOCUMENTS

    8,885,022 B2      11/2014   Gay et al.
    8,982,261 B2       3/2015   Spears et al.
    9,007,422 B1*      4/2015   Kwon ............. G06T 7/12, 348/14.03
    2003/0063185 A1*   4/2003   Bell ............. G01S 17/023, 348/46
    2012/0218410 A1*   8/2012   Kim .............. G02B 5/201, 348/148
    2012/0320216 A1   12/2012   Mkrtchyan et al.
    2013/0083003 A1    4/2013   Perez et al.
    2013/0293468 A1   11/2013   Perez et al.
    2014/0104274 A1    4/2014   Hilliges et al.
    2014/0176724 A1*   6/2014   Zhang ............ H01L 27/14618, 348/164
    2014/0240492 A1    8/2014   Lee
    2014/0306874 A1*  10/2014   Finocchio ........ G06F 3/017, 345/156
    2015/0054734 A1    2/2015   Raghoebardajal et al.
    2015/0062003 A1*   3/2015   Rafii ............ G06F 3/017, 345/156
    2015/0261299 A1*   9/2015   Wajs ............. G06F 3/011, 726/19
    2016/0181314 A1*   6/2016   Wan .............. H01L 27/14612, 348/302
    2016/0261300 A1*   9/2016   Fei .............. G06T 7/593

    OTHER PUBLICATIONS

"Vrvana Totem - A premium VR Headset", Vrvana (https://www.vrvana.com/), printed Dec. 15, 2015, 5 pages.
Tang, et al., "High Resolution Photography with an RGB-Infrared Camera", In Proc. 7th Int. Conf. on Computational Photography (ICCP), Houston, TX, IEEE, Apr. 24-26, 2015, pp. 1-10.

* cited by examiner
`
`
`

`

[Sheet 1 of 7: FIG. 1, schematic of the example VR system 100, including a VR content system 140, a network 120, the V-HMD 110, client devices 131, 132 and 133, a VR controller 136, and other elements 115; FIG. 2, front view of the V-HMD 110 showing the cameras 113 and 114 on the front face 112 and a virtual dividing line 205]
`
`
`

`

[Sheet 2 of 7: FIG. 3, schematic of the example imaging pipeline 300. A camera portion 305 includes the RGBIR cameras 113 and 114 with lenses 306 and 307, incoming light 308A and 308B, a sync block, and an IR emitter 309. An image processing portion 310 includes a reconstructor 311 that forms RGB images 313B and 314B (1920x1080) and IR images 313C and 314C (960x540) from the captured RGBIR images 313A and 314A, a chroma sub-sampler 315, croppers/scalers 320 and 322 (producing QHD IR pair data), and a stereo former 319. A VR function portion 330 includes a 6DoF processor 316 producing 6DoF tracking data 332, a finger/hand tracker 334 consuming depth data, an optical tracker 338, a display module producing display image(s), an IMU unit 318 producing IMU data 317, and other function(s) 335.]
`
`
`

`

[Sheet 3 of 7: FIG. 4, example RGBIR imaging sensor 400, with red (R) sensing pixels 405, green (G) sensing pixels 406, blue (B) sensing pixels 407, and infrared (IR) sensing pixels 408]
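The layout FIG. 4 depicts, per the detailed description, starts from a conventional Bayer tile and replaces one green site of each 2x2 tile with an IR site, giving a 1R:1G:1B:1IR ratio. A minimal sketch of such a mosaic follows; which green position the IR pixel takes is an illustrative assumption, not something the patent fixes:

```python
# Sketch of an RGBIR mosaic in the spirit of FIG. 4: a Bayer-like 2x2
# tile in which one green site is replaced by an IR site. The choice of
# the bottom-right position for IR is an assumption for illustration.

def rgbir_pattern(rows, cols):
    """Return a rows x cols grid of channel labels for an RGBIR mosaic."""
    tile = [['R', 'G'],
            ['B', 'IR']]  # IR takes the place of the second Bayer green
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

pattern = rgbir_pattern(4, 4)
# Every 2x2 block contains exactly one R, one G, one B and one IR pixel.
```

With this helper, the 1:1:1:1 channel ratio described in the text can be checked by counting labels over any even-sized grid.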
`
`
`

`

[Sheet 4 of 7: FIG. 5, extraction of IR pixel values 511-519 from a full RGBIR image array 500 into a smaller array 550 of just IR pixel data]
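The collection step FIG. 5 illustrates, gathering the scattered IR pixel values of the full array 500 into a smaller IR-only array 550 while leaving vacancies behind, can be sketched as follows. The 2x2 tile layout with IR at the bottom-right, and the use of None as the vacancy marker, are illustrative assumptions rather than details taken from the patent:

```python
# Sketch of the FIG. 5 extraction: collect IR values from an RGBIR
# mosaic into a half-resolution IR-only array, marking the vacated
# locations. Tile layout (IR on odd rows/columns) is an assumption.

NULL = None  # placeholder left behind where an IR value was removed

def extract_ir(rgbir):
    """Collect IR pixel values from a full RGBIR mosaic `rgbir` (a list
    of equal-length rows) into a smaller IR-only array, leaving NULL
    markers at the vacated locations for later color reconstruction."""
    rows, cols = len(rgbir), len(rgbir[0])
    ir = []
    for r in range(1, rows, 2):          # IR pixels sit on odd rows...
        ir_row = []
        for c in range(1, cols, 2):      # ...and odd columns of each tile
            ir_row.append(rgbir[r][c])
            rgbir[r][c] = NULL           # mark the vacancy for later fill-in
        ir.append(ir_row)
    return ir

# A 4x4 mosaic holds four 2x2 tiles, so the IR image is 2x2.
mosaic = [
    ['R1', 'G1', 'R2', 'G2'],
    ['B1',  10,  'B2',  11],
    ['R3', 'G3', 'R4', 'G4'],
    ['B3',  12,  'B4',  13],
]
ir_image = extract_ir(mosaic)
# ir_image == [[10, 11], [12, 13]]; mosaic now holds None at those spots
```

This mirrors the text's description of forming the IR images 313C and 314C while giving the removed locations "a NULL, vacant, etc. value or indicator".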
`
`
`

`

[Sheet 5 of 7: FIG. 6A, first part of a flowchart of the example method 600]

START
602: Provide first and second RGBIR sensors in a stereoscopic arrangement in a virtual reality headset.
604: Capture first and second RGBIR images using respective ones of the first and second sensors.
606: Form a first IR image by extracting IR sensing pixels from the captured first image.
608: Form a second IR image by extracting IR sensing pixels from the captured second image.
610: Form a first color image by reconstructing color pixels at locations of the extracted IR sensing pixels in the captured first image.
612: Form a second color image by reconstructing color pixels at locations of the extracted IR sensing pixels in the captured second image.
614: Tracking? If YES:
616: Detect IR emitters and/or reflectors based on the first and second IR images formed from corresponding images captured by the first and second sensors, and track fingers/hand.
618: Pass-through? If YES:
620: Stereoscopically display in the virtual reality headset the first and second color images formed from corresponding images captured by the first and second sensors.
(continued in FIG. 6B)

FIG. 6A
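Steps 610 and 612 reconstruct color pixels at the locations vacated by the extracted IR pixels. The patent leaves the reconstruction method open ("any number and/or type(s) of method(s), technique(s), algorithm(s), circuit(s)"); one simple possibility is sketched below, assuming a tile layout in which green pixels sit directly above and below each vacated IR location, so a vacancy can be filled by averaging its vertical neighbors:

```python
# Sketch of one way to realize steps 610/612: fill each vacated IR
# location (marked None) with the average of the green pixels directly
# above and below it. That those neighbors are green is an assumption
# about the tile layout, not a requirement stated in the patent.

def fill_vacancies(mosaic):
    """Replace None markers (vacated IR pixels) in `mosaic` with the
    average of the numeric values immediately above and below.
    Returns the number of locations filled."""
    rows, cols = len(mosaic), len(mosaic[0])
    filled = 0
    for r in range(rows):
        for c in range(cols):
            if mosaic[r][c] is None:
                neighbors = [mosaic[nr][c]
                             for nr in (r - 1, r + 1)
                             if 0 <= nr < rows
                             and isinstance(mosaic[nr][c], (int, float))]
                mosaic[r][c] = sum(neighbors) / len(neighbors)
                filled += 1
    return filled

# Greens are numeric here; R and B sites are labels and are ignored.
mosaic = [['R', 40, 'R', 60],
          ['B', None, 'B', None],
          ['R', 44, 'R', 64],
          ['B', None, 'B', None]]
fill_vacancies(mosaic)
# mosaic[1][1] == 42.0: average of the greens 40 (above) and 44 (below)
```

A production demosaicing step would use more sophisticated interpolation; this only shows the shape of the "fill in the blank or vacant IR pixel locations with suitable green pixel values" operation described in the specification.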
`
`
`

`

[Sheet 6 of 7: FIG. 6B, second part of the flowchart of the example method 600]

622: Finger/hand tracking? If YES:
624: Form depth data from the first and second IR images and the first and second color images formed from corresponding images captured by the first and second sensors.
626: Perform finger/hand tracking based on the depth data.
628: 6DOF tracking? If YES:
630: Derive 6DOF tracking data from the first and second color images formed from images captured by the first and second sensors.
END

FIG. 6B
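Step 624 forms depth data from the stereo pair. The patent does not prescribe an algorithm; a common approach for a rectified stereo pair is block matching, where the per-pixel horizontal disparity between the two views is inversely proportional to depth. A deliberately tiny single-row sketch follows; the focal length and baseline figures are made up for illustration and are not from the patent:

```python
# Sketch of depth-from-stereo in the spirit of step 624, reduced to a
# single row and a 1-pixel matching window for clarity.

def disparity_1d(left, right, max_disp=4):
    """Per-pixel disparity between corresponding rows of a rectified
    stereo pair, chosen to minimize absolute intensity difference."""
    out = []
    for x, v in enumerate(left):
        best_d, best_err = 0, float('inf')
        for d in range(0, min(max_disp, x) + 1):  # candidate shifts
            err = abs(v - right[x - d])
            if err < best_err:
                best_d, best_err = d, err
        out.append(best_d)
    return out

def depth_from_disparity(disp, focal_px=500.0, baseline_m=0.06):
    """depth = focal * baseline / disparity; infinite at zero disparity.
    focal_px and baseline_m are illustrative values only."""
    return [focal_px * baseline_m / d if d else float('inf') for d in disp]

left_row  = [10, 20, 30, 40, 50]
right_row = [20, 30, 40, 50, 60]          # same scene shifted one pixel
disp = disparity_1d(left_row, right_row)  # [0, 1, 1, 1, 1]
depths = depth_from_disparity(disp)
```

Real finger/hand tracking would run a 2-D block matcher (or similar) over the full image pairs and combine the IR and color channels, as the flowchart's step 624 suggests; this sketch only shows the disparity-to-depth relationship.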
`
`
`

`

[Sheet 7 of 7: FIG. 7, block schematic diagram of an example computer device and an example mobile computer device that may be used to implement the examples disclosed herein (reference numerals P00-P82)]
`
`

`

VIRTUAL REALITY HEAD-MOUNTED DEVICES HAVING REDUCED NUMBERS OF CAMERAS, AND METHODS OF OPERATING THE SAME

FIELD OF THE DISCLOSURE

This disclosure relates generally to virtual reality, and, more particularly, to virtual-reality head-mounted devices having reduced numbers of cameras, and methods of operating the same.

BACKGROUND

Virtual-reality head-mounted displays have multiple cameras to image and/or render virtual-reality environments in which someone can be physically or virtually present, and to track movements of the viewer and/or other items physically and/or virtually present in the virtual-reality environment.

SUMMARY

Virtual-reality head-mounted devices or displays having reduced numbers of cameras, and methods of operating the same are disclosed. A disclosed example method includes providing a virtual-reality head-mounted display having an imaging sensor, the imaging sensor including color-sensing pixels, and infrared-sensing pixels amongst the color-sensing pixels; capturing, using the imaging sensor, an image having a color portion and an infrared portion; forming an infrared image from at least some of the infrared portion from the image; performing a first tracking based on the infrared image; forming a color image by replacing the at least some of the removed infrared portion with color data determined from the color portion of the image and the location of the removed infrared-sensing pixels in the image; and performing a second tracking based on the color image.

A disclosed example virtual-reality head-mounted device for use in a virtual-reality environment includes an imaging sensor to capture an image using color-sensing pixels, and infrared-sensing pixels located amongst the color-sensing pixels, the captured image having a color portion and an infrared portion; a reconstructor configured to remove at least some of the infrared portion from the image, and form an infrared image from the removed infrared portion; a first tracker configured to perform first virtual-reality tracking within the virtual-reality environment using the infrared image; an image modifier to form a color image by substituting the removed infrared portion with color-sensing pixels determined from the color portion of the image and the locations of the removed infrared-sensing pixels in the image; and a second tracker configured to carry out a second virtual-reality tracking based on the color image.

A disclosed example non-transitory machine-readable medium stores machine-readable instructions that, when executed, cause a machine to at least provide a virtual-reality head-mounted display having an imaging sensor, the imaging sensor including color-sensing pixels, and infrared-sensing pixels amongst the color-sensing pixels; capture, using the imaging sensor, an image having a color portion and an infrared portion; form an infrared image using at least some of the infrared portion; perform a first tracking based on the infrared image; form a color image by replacing the at least some of the infrared portion with color data determined from the color portion of the image and the location of the at least some of the infrared portion in the image; and perform a second tracking based on the color image.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic illustration of an example virtual-reality system including a head-mounted device having fewer cameras in accordance with the teachings of this disclosure.

FIG. 2 is an example front view of the disclosed example virtual-reality head-mounted display of FIG. 1 in accordance with the teachings of this disclosure.

FIG. 3 is a schematic diagram of an example virtual-reality head-mounted display having fewer cameras in accordance with the teachings of this disclosure.

FIG. 4 is an example imaging sensor in accordance with the teachings of this disclosure.

FIG. 5 is a diagram illustrating an example extraction of infrared-sensing pixels.

FIGS. 6A and 6B are a flowchart illustrating an example method that may, for example, be implemented using machine-readable instructions executed by one or more processors to operate the example head-mounted displays disclosed herein.

FIG. 7 is a block schematic diagram of an example computer device and an example mobile computer device that may be used to implement the examples disclosed herein.

DETAILED DESCRIPTION

Virtual-reality (VR) head-mounted displays or devices (V-HMDs) are next-generation computing platforms for providing virtual-reality systems and/or environments. A V-HMD can include multiple sub-systems, such as a display sub-system, a camera sub-system, an image-processing sub-system, a controller sub-system, etc. There is a need for camera and image-processing sub-systems that can perform, among other things, a plurality of VR functions that can meet customer, user, and/or wearer expectations for VR functionality. Example VR functions include, but are not limited to, 6-degree-of-freedom (6DoF) head tracking, finger/hand tracking, environmental (or depth) sensing, pass-through of images for display via or in the V-HMD, tracking a VR controller or other device held by a user, etc. Current V-HMD designs require dedicated camera(s) for each VR function being implemented: for example, 1st and 2nd cameras for head tracking, 3rd and 4th cameras for finger/hand tracking, a 5th camera for environmental sensing (e.g., depth), 6th and 7th cameras for pass-through, and 8th and 9th cameras for tracking a VR controller, etc. Thus, current V-HMDs can require nine or more cameras to provide this basic set of VR functions. The fact that conventional V-HMDs need so many cameras presents numerous and significant disadvantages to overcome, especially given that a large number of V-HMDs may be retail devices where looks, size, industrial design, weight, etc. are important. Example disadvantages include weight increase, size increase, cost increase, an increased number of components, decreased reliability, increased complexity, industrial-design limitations, etc. These cameras typically differ from other cameras of a V-HMD in, for example, size, angle of view, resolution, etc. Future needs for additional VR functions further compound these issues. Limiting the number of cameras correspondingly reduces the number of VR functions that can be realized, thus making such a V-HMD less attractive in the marketplace. All said, there are significant hurdles in conventional V-HMDs that must be overcome to
`
`
`

`

meet the needs in the market, which cannot be met by conventional V-HMDs. The ability of a V-HMD to support such a large number of VR functions addresses a significant unmet need.

Example V-HMDs that overcome at least these problems are disclosed herein. The disclosed examples require fewer cameras, two (2) versus nine (9), while still providing at least the same set of VR functions delineated above. The cameras, and the imaging sensors therein, are designed to perform a wide range of imaging capabilities using fewer cameras. An example camera can be designed to, for example, perform a set of camera capabilities consisting of, for example, a wide field-of-view, a maximum and/or minimum sampling rate, a needed pixel arrangement, etc. to realize the desired set of VR functions. For instance, an example output of an example wide-angle-capable imaging sensor as disclosed herein can be cropped to form a moderate, a normal, a narrow, etc. field-of-view image. Thus, a wide-angle camera in accordance with the teachings of this disclosure can be used to capture data needed to simultaneously and/or sequentially realize a narrow, a normal and/or a wide field-of-view. Because the super-set of capabilities addresses or provides each VR function, the cost, complexity, weight, etc. of using a conventional V-HMD to implement a large number of VR functions can be reduced. Realizing support for a larger number of VR functions using only a pair of stereoscopic cameras provides a significant new advantage that can be brought to bear in the V-HMD marketplace. For example, an imaging sensor and its associated camera could be designed to support a wide angle of view, and be processed to extract (crop) image data for a normal angle of view. That is, the camera(s) can be viewed as being able to support the requirements of two or more VR functions. In general, a camera or its imaging sensor has to be designed such that all of the data needed for all supported VR functions can be obtained by processing the image data in a respective or corresponding manner. As disclosed herein, a plurality of VR functions can be realized using only a pair of cameras, which is about a 75% decrease in the required number of cameras. It is contemplated that other past, present or future functions may also be supported by the two-camera configuration disclosed herein.

Reference will now be made in detail to non-limiting examples of this disclosure, examples of which are illustrated in the accompanying drawings. The examples are described below by referring to the drawings, wherein like reference numerals refer to like elements. When like reference numerals are shown, corresponding description(s) are not repeated and the interested reader is referred to the previously discussed figure(s) for a description of the like element(s). These examples and variants and portions thereof shown in the attached drawings are not drawn to scale, with specific shapes, or with specific relative dimensions, as they are not important to this disclosure and may render the drawings more difficult to comprehend. Specific elements may have been intentionally exaggerated for discussion purposes. Instead, the drawings have been drawn for clarity and comprehension. Further, the arrangement of elements and couplings may be changed, rearranged, etc. according to other implementations of this disclosure and the claims herein.

Turning to FIG. 1, a block diagram of an example VR system 100 is shown. The example VR system 100 includes a V-HMD 110 having a front face 112 in accordance with the teachings of this disclosure. As shown in FIGS. 2 and 3, only a pair of cameras 113 and 114 is needed at the front face 112. The cameras 113 and 114 can be matched and separated by about the same distance as a person's eyes so that, if a pair of pictures is taken, the pictures can collectively provide a stereoscopic impression. For example, in FIG. 2 the cameras 113 and 114 face upward from the page, and face leftward in the example of FIG. 3. The cameras 113 and 114 can capture a portion of the light moving toward or impinging on the face 112. The light can move toward the face 112 along different paths or trajectories. The position(s) of the cameras 113 and 114 may vary based on any number and/or type(s) of design parameters. For example, the space between the cameras 113 and 114 can impact perceived depth of field. They may also be selected based on a desired V-HMD 110 size, industrial design, use of additional cameras, anticipated size of wearer, additional VR functions or features included, etc. Additional and/or alternative cameras can be included and/or used as an upgrade to, e.g., support additional VR functions that were not previously contemplated. For example, a fish-eye lens may not have been originally included, but could later be installed to support newly contemplated VR functions. Further, one or more of the cameras 113 and 114 may be selectively controllable to image different locations at different times. Further still, the cameras 113 and 114 need not be the same. Moreover, a camera 113, 114 may be, e.g., updated by a user or service center to support additional V-HMD functions, and/or to modify or customize functionality of a V-HMD. As will be discussed more thoroughly below, a user 135 can have a VR controller 136. The VR controller 136 can, e.g., emit and/or reflect infrared (IR) or any other type(s) of light that can be detected by one or more of the cameras 113 and 114 to help determine positions of, for example, the user's hands. Likewise, other elements 115 in the VR system 100 can emit and/or reflect IR or other type(s) of light for tracking or other VR purposes.

In general, the example VR system 100 provides a VR environment and VR content that can be accessed, viewed, and/or otherwise interacted with. As will be described below in connection with FIG. 3, using only two of the disclosed example cameras 113 and 114, the example V-HMD 110 can facilitate the implementation of multiple VR functions rather than having to implement an impractically larger number of cameras, as is required in conventional V-HMDs (e.g., as discussed above).

As shown in FIG. 1, the example VR system 100 includes a plurality of computing and/or electronic devices that can exchange data over a network 120. The devices may represent clients or servers, and can communicate via the network 120 or any other additional and/or alternative network(s). Example client devices include, but are not limited to, a mobile device 131 (e.g., a smartphone, a personal digital assistant, a portable media player, etc.), an electronic tablet, a laptop or netbook 132, a camera, the V-HMD 110, a desktop computer 133, a gaming device, and any other electronic or computing devices that can communicate using the network 120 or other network(s) with other computing or electronic devices or systems, or that may be used to access VR content or operate within a VR environment. The devices 110 and 131-133 may represent client devices. In some examples, the devices 110 and 131-133 include one or more processors and one or more memory devices, which can execute a client operating system and one or more client applications that can access, control, and display VR content on a display device implemented together with each respective device. One or more of the devices 110 and 131-133 can, e.g., emit or reflect infrared (IR) or other type(s) of light that can be detected by
`
`
`

`

one or more of the cameras 113 and 114 to help determine the position of a user or the devices 110, 131-133 for tracking or other VR functions.

An example stereoscopic placement of the cameras 113 and 114 is shown in FIG. 2. In the example of FIG. 2, the cameras 113 and 114 are equally spaced from opposite sides of a virtual dividing line 205, and are positioned the same distance from the bottom 210 of the face 112. Other camera configurations reflecting other and/or alternative implementation objectives and/or desired VR functions are contemplated.

Turning now to FIG. 3, a schematic diagram of an example imaging pipeline 300 that may be used with any of the example V-HMDs disclosed herein is shown. As discussed below, the imaging pipeline 300 can also be used to carry out other non-imaging function(s) 335. The imaging pipeline 300 includes a camera portion 305 that includes the cameras 113 and 114, an image-processing portion 310 that forms the necessary images for VR tracking functions, and possibly, in some instances, other portion(s) 335 that can perform other imaging function(s) and/or non-imaging function(s). FIG. 3 shows the logical arrangements of elements, blocks, functions, etc. For clarity of illustration and ease of comprehension, details such as busses, memory, caches, etc. are omitted, but inclusion and implementation of such details are well known to those of skill in the art.

The example camera portion 305 of FIG. 3 includes the cameras 113 and 114 together with respective lenses 306 and 307. In the orientation of FIG. 3, light 308A and 308B moving from left to right passes through the lenses 306, 307, possibly with optical effects performed by the lenses 306, 307, and impinges on the RGBIR imaging sensors of the cameras 113, 114. Lenses such as 306 and 307 may be fixed lenses, or can have selectively variable focal lengths, selectively perform focusing, have selectively variable apertures, have selectively variable depth of field, etc. These selective functions can be performed automatically, manually, or some combination thereof.

The two cameras 113 and 114 can be identical to each other, with both having red (R) sensing pixels (one of which is designated at reference numeral 405), green (G) sensing pixels (one of which is designated at reference numeral 406), blue (B) sensing pixels (one of which is designated at reference numeral 407), and infrared (IR) sensing pixels (one of which is designated at reference numeral 408). The pixels 405-408 can be arranged in a regular or semi-random pattern, such as the example pattern shown in FIG. 4. In the example of FIG. 4, some of the G sensing pixels 406 of a conventional Bayer RGB sensor are replaced with IR sensing pixels 408 in accordance with the teachings of this disclosure. Accordingly, the IR sensing pixels 408 are placed, located, etc. within, amongst, etc. the color-sensing R, G and B pixels 405-407. Compared with conventional Bayer RGB sensors that can be used in digital cameras, the RGBIR sensor 400 disclosed herein can have a mixture of R, G and B pixels sensitive to red, green and blue visible light, and IR pixels sensitive to non-visible infrared light (e.g., see FIG. 4). Accordingly, the imaging sensors of the cameras 113, 114 are RGBIR sensors, and images output by the cameras are RGBIR images. That is, the RGBIR images output by the cameras 113, 114 convey red, green, blue and IR information. As shown, a 2x2 block of pixels can include 1 red pixel, 1 green pixel, 1 blue pixel and 1 IR pixel. Color and IR pixels may be arranged in other pattern(s), and/or with different ratio(s).

The R, G and B pixels 405-407 can be equipped with a per-pixel IR-cut filter so a pixel is not sensitive or responsive to IR light. Additionally or alternatively, a dual-band filter that passes only visible light and a narrow band of IR light can be placed inside the camera module such that the R, G and B pixels 405-407 will only be responsive to visible light (i.e., an IR-cut filter per pixel). With this arrangement, IR sensing pixels only sense narrowband IR light, and the R, G and B pixels 405-407 only sense R, G and B visible light. This way, the color image produced by the R, G and B sensing pixels 405-407 would be improved compared to having no per-pixel IR-cut filter on top of the R, G and B sensing pixels 405-407, because no IR light is leaked into those R, G and B sensing pixels 405-407.

In some instances, the camera portion 305 includes an IR emitter 309. The example IR emitter 309 can be selectively operated or activated to emit IR light that can reflect off objects such as the example objects 115 and 136 (see FIG. 1), allowing the reflected IR light to be used to locate the objects 115, 136. Additionally or alternatively, the IR emitter 309 may be implemented separately from the V-HMD 110.

Turning to the example image-processing portion 310 of FIG. 3, RGBIR images 313A, 314A captured by respective ones of the cameras 113, 114 are provided to a reconstructor 311. Using any number and/or type(s) of method(s), technique(s), algorithm(s), circuit(s), etc., the reconstructor 311 can create an RGB image 313B, 314B and an IR image 313C, 314C for respective ones of the RGBIR images 313A, 314A provided by the cameras 113, 114.

As shown in FIG. 5, the IR images 313C and 314C of FIG. 3 can be formed by collecting the values of the IR pixels from a full RGBIR image array 500 into a smaller array 550 of just IR pixel data. Illustrated in FIG. 5 are three example movements of IR pixels 511, 517 and 519 from the image 500 into the image 550. The other IR pixels can be moved or collected in a similar manner.

The RGB images 313B and 314B may be formed by extracting or removing the IR pixels from an RGBIR image. As they are removed or extracted, the values in the array where the IR pixels were removed can be given a NULL, vacant, etc. value or indicator. In some examples, the IR pixels need not be so modified, as they can be later overwritten by subsequent image processing. In the example of FIG. 4, the IR pixels replaced every other green pixel; however, other patterns could be used. Using any number and/or type(s) of method(s), technique(s), algorithm(s), circuit(s), etc., the reconstructor 311 fills in the blank or vacant IR pixel locations with suitable green pixel values, thus forming a completed RGB image 313B, 314B.

The manner in which the four images 313B, 313C, 314B and 314C of FIG. 3 are created, formed or generated by the reconstructor 311 can be adaptive, changed, combined, etc. to perform a number of V-HMD functions and/or, more broadly, VR functions. Because of the various processing of the images 313A and 314A, the reconstructor 311 makes the images 313B, 313C, 314B and 314C give the impression that more than two cameras were used to capture them.

The two RGB images 313B and 314B can be processed by a chroma sub-sampler 315.

To perform 6DoF tracking, the example imaging pipeline 300 includes any number and/or type(s) of 6DoF processor(s) 316. Using any number and/or type(s) of method(s), technique(s), algorithm(s), circuit(s), etc., the 6DoF
`
`
`

`

processor 316 can process the chroma sub-sampled images provided by the chroma sub-sampler 315 to track stationary [...]

[...] eliminated and/or implemented in any other way. Further, one or more circuit(s), programmable processor(s), fuses, application-specific integrated circuit(s) [...]
