US006630931B1

Trika et al.

(10) Patent No.:      US 6,630,931 B1
(45) Date of Patent:  Oct. 7, 2003
`
(54) GENERATION OF STEREOSCOPIC
DISPLAYS USING IMAGE APPROXIMATION
`
(75) Inventors: Sanjeev N. Trika, Hillsboro, OR (US);
                John I. Garney, Aloha, OR (US)

(73) Assignee: Intel Corporation, Santa Clara, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 883 days.

(21) Appl. No.: 08/935,314

(22) Filed: Sep. 22, 1997
(51) Int. Cl.: G06T 15/00
(52) U.S. Cl.: 345/419; 348/42
(58) Field of Search: 345/419, 422, 427; 348/42, 43, 44, 616, 617, 620
`
(56) References Cited

U.S. PATENT DOCUMENTS
`
4,329,708 A * 5/1982  Yamamoto et al. ........ 348/617
4,345,272 A * 8/1982  Shirota ................ 348/617
4,591,898 A * 5/1986  DeBoer et al. .......... 348/617
OTHER PUBLICATIONS

Shaun Love, "Nonholographic, Autostereoscopic, Nonplanar Display of Computer Generated Images," Thesis submitted to North Carolina State University, 12 pages, 1990.
Stephen J. Adelson, "Stereoscopic Projections: Parallel Viewing Vectors, Rotations, and Shears," Los Alamos National Laboratory, Los Alamos, New Mexico, pp. 1-17, Dec. 22, 1993.
Stephen J. Adelson, et al., "Simultaneous Generation of Stereoscopic Views," Computer Graphics Forum, Vol. 10, pp. 3-10, 1991.
Stephen J. Adelson, et al., "Stereoscopic ray-tracing," The Visual Computer, Vol. 10, pp. 127-144, 1993.
Shaun Love, et al., Final Session of 1997 SIGGRAPH conference, presented on Aug. 3, 1997 in Los Angeles, CA, 23 pages.
Larry F. Hodges, et al., "Stereo and Alternating-Pair Techniques for Display of Computer-Generated Images," IEEE CG&A, Sep. 1985, pp. 38-45.
`
* cited by examiner

Primary Examiner: Mano Padmanabhan
(74) Attorney, Agent, or Firm: Blakely, Sokoloff, Taylor & Zafman LLP

(57) ABSTRACT

A method and apparatus for generating stereoscopic displays in a computer system. Each frame in a sequence of frames includes a left image and a right image, and each image includes a plurality of pixels. Depth information for objects depicted in the display is stored in a Z buffer. Either the left image or the right image is computed as an approximation of the other using the depth information stored in the Z buffer. The approximated image is alternated between the left and the right image on a frame-by-frame basis, so that the left and right images are each approximated every other frame. Pixels which are not filled in the approximated image are assigned values based on the corresponding pixels in the same (non-approximated) image from the preceding frame.

15 Claims, 7 Drawing Sheets
`
`
`
[Representative drawing: flow diagram of FIG. 8]

IPR2018-01045
Sony EX1016 Page 1
`
`
`
U.S. Patent          Oct. 7, 2003          Sheet 1 of 7          US 6,630,931 B1

[FIG. 1: block diagram of a computer system in which the invention is implemented; rotated figure labels unrecoverable in scan]
`
`
`
U.S. Patent          Oct. 7, 2003          Sheet 2 of 7          US 6,630,931 B1

[FIG. 2: graphics controller detail, with processing/control circuitry, Z buffer, and frame buffer coupled to the bus and display]
`
`
`
U.S. Patent          Oct. 7, 2003          Sheet 3 of 7          US 6,630,931 B1

[FIG. 3: world frame and local frames of two displayed objects; FIG. 4: world frame and camera frame relationship]
`
`
`
U.S. Patent          Oct. 7, 2003          Sheet 4 of 7          US 6,630,931 B1

[FIGS. 5A and 5B: window frame and viewport frame relationship; window 25 with dimensions WSX, WSY, WSZ]
`
`
`
U.S. Patent          Oct. 7, 2003          Sheet 5 of 7          US 6,630,931 B1

[FIG. 6: geometry of depth z, focal length, and interocular distance]
`
`
`
U.S. Patent          Oct. 7, 2003          Sheet 6 of 7          US 6,630,931 B1

FIG. 7

  Compute K1, K2 (701)
  Render all triangle data from left eye position
    (transform, light, setup, rasterize) (702)
  Current scan line = top scan line (703)
  Current pixel = first pixel in current scan line (704)
  Compute XvR = XvL + K1 + K2·Zv (705)
  Store R, G, B at (XvR, Yv) in right buffer (706)
  More pixels in scan line? (707) -- Yes: current pixel = next pixel (710), repeat from 705
  More scan lines? (708) -- Yes: current scan line = next scan line (709), repeat from 704
  No: end
`
`
`
U.S. Patent          Oct. 7, 2003          Sheet 7 of 7          US 6,630,931 B1

FIG. 8

  Render left and right images of frame 1 and display (801)
  More frames? (802) -- No: end
  Current frame = next frame (803)
  Current frame = odd-numbered frame? (804)
    Yes: Render right image (805A);
         generate left image as approximation from right image (806A);
         for each pixel not filled in left image, use data values of that
         pixel from previous frame's left image (807A)
    No:  Render left image (805B);
         generate right image as approximation from left image (806B);
         for each pixel not filled in right image, use data values of that
         pixel from previous frame's right image (807B)
  Display left and right images for current frame (808); return to 802
`
`
`
GENERATION OF STEREOSCOPIC DISPLAYS USING IMAGE APPROXIMATION

FIELD OF THE INVENTION

The present invention pertains to the field of visual display techniques for computer systems. More particularly, the present invention relates to techniques for generating stereoscopic images for virtual reality based applications.

BACKGROUND OF THE INVENTION

Modern computer systems are capable of generating images with a high degree of realism. Traditional computer display techniques have achieved realism by generating two-dimensional (2-D) views of three-dimensional (3-D) scenes or data. However, advancements in virtual reality technology and in computer processing power have drawn considerable interest to technology for generating 3-D images of scenes or data. Such technology is highly desirable for use in many applications, particularly in computer games and in complex, real-world simulations.

The manner in which the human brain interprets visually perceived objects in 3-D is well understood. The brain perceives objects in 3-D because the eyes detect images in stereo. A stereo effect is caused by the differences between the images detected by the left eye and the right eye due to the separation between the two eyes. Consequently, it is well known that the perception of 3-D can be provided artificially by generating two spatially-offset 2-D images of the same subject and providing these images separately to the left and right eyes.

Regardless of the medium used, existing 3-D techniques each generally employ some mechanism to ensure that each eye sees only the appropriate one of the two views. Various approaches have been used to provide this function, such as relatively simple and inexpensive anaglyphs (color-filtered eyeglasses), liquid crystal shutter glasses, and complex, expensive head-mounted devices which have a dedicated display for each eye.

Certain problems are associated with providing 3-D effects in the computer field, including relatively large requirements for processing power, efficiency, and memory capacity. In many existing systems, these requirements stem from the fact that two separate images are generated for each frame that is rendered, i.e., one for the left eye and one for the right eye, compared to only one image per frame for conventional, two-dimensional (2-D) computer displays. For each frame to be rendered for 3-D display, the model geometry must be rendered from both eye points. Thus, each triangle in a scene is transformed, lit, set up, and rasterized twice for each frame. As a result, 3-D stereo applications must either execute at half the potential geometry rate or at half the potential frame rate. Either result tends to adversely impact the degree of realism experienced by the user. Hence, what is needed is a fast, efficient, and inexpensive technique for generating 3-D displays in a computer system.

SUMMARY OF THE INVENTION

The present invention includes a method of generating a stereoscopic sequence of frames. Each frame in the sequence has a left image and a right image. For at least one frame in the sequence, one of the left image and the right image is an approximation of the other image. In the method, any pixel not filled in the approximated image is assigned the data values of a corresponding pixel in an image from a preceding frame. Other features of the present invention will be apparent from the accompanying drawings and from the detailed description which follows.
`
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:

FIG. 1 is a block diagram of a computer system in which the present invention is implemented.
FIG. 2 is a block diagram illustrating components of a graphics controller, including a Z buffer.
FIG. 3 illustrates a relationship between a world frame and local frames associated with two objects to be displayed.
FIG. 4 illustrates a relationship between the world frame and a camera frame.
FIGS. 5A and 5B illustrate a relationship between a window frame and a viewport frame.
FIG. 6 illustrates relationships between the parameters of depth z, focal length φ, and interocular distance δ.
FIG. 7 is a flow diagram illustrating a routine for generating 3-D stereoscopic images, in which the right image is an approximation of the left image.
FIG. 8 is a flow diagram illustrating a routine for generating 3-D stereoscopic images in which the approximated image is alternated between the left image and the right image and unfilled pixels are assigned values.
`
DETAILED DESCRIPTION

A method and apparatus are described for generating fast, efficient, low-cost stereoscopic displays in a computer system. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram or other symbolic form in order to facilitate description of the present invention.

As will be described in detail below, the present invention improves the generation of 3-D stereoscopic images in a computer system by generating either the left or right image as an approximation of the other. The approximation is generated using depth information stored in a Z buffer. Approximation of one of the left and right images eliminates the need to render two separate images for each scene to be rendered, thus reducing the amount of required memory and processing power in the computer system. This approach allows 3-D stereo applications to execute at full geometry and refresh rates, because the cost of generating the second image is substantially reduced.
Refer to FIG. 1, which illustrates a computer system 1 in which the present invention is implemented according to one embodiment. The computer system 1 includes a central processing unit (CPU) 10, random access memory (RAM) 11, read-only memory (ROM) 12, and a mass storage device 13, each coupled to a bus 18. The bus 18 may actually comprise one or more physical buses interconnected by various bridges, controllers and/or adapters. Also coupled to the bus 18 are a communication device 19 for providing an interface for the computer system 1 to a network connection 20, a keyboard 14, a conventional pointing device 15, and a graphics controller 16. The graphics controller 16 is further
`
coupled to a display device 17 to provide output display data to the display device 17, which displays information visually to a user. The display device 17 may be any conventional visual display device, such as a cathode ray tube (CRT), a liquid crystal display (LCD), etc., or an unconventional display such as a head-mounted or shutter-glasses based stereo display.

The graphics controller 16, which may alternatively be referred to as a graphics accelerator or graphics adapter, provides various processing functions for generating complex (e.g., 3-D) visual displays. Mass storage device 13 may include any suitable device for storing large quantities of data in a nonvolatile manner, such as a magnetic, optical, or magneto-optical (MO) storage device, e.g., a magnetic disk or tape, Compact Disk ROM (CD-ROM), CD-R (CD-Recordable), Digital Versatile Disk (DVD), etc. The communication device 19 may be any device suitable for providing the computer system 1 with a communication interface with a network, such as a conventional telephone modem, a cable television modem, an Integrated Services Digital Network (ISDN) adapter, a Digital Subscriber Line (xDSL) adapter, an Ethernet adapter, or the like. The pointing device 15 may be any suitable device for positioning a cursor or pointer on the display device 17, such as a mouse, trackball, etc.
In one embodiment, the present invention is carried out in the computer system 1 in response to the CPU 10 executing sequences of instructions contained in memory. The memory may be any one of RAM 11, ROM 12, or mass storage device 13, or a combination of these devices. The instructions may be loaded into RAM 11 from a persistent store, such as mass storage device 13, and/or from one or more other computer systems (collectively referred to as a "host computer system") over a network. For example, a host computer system may transmit a sequence of instructions to computer system 1 in response to a message transmitted to the host computer system over the network by computer system 1. As computer system 1 receives the instructions via the network connection 20, computer system 1 stores the instructions in memory. Computer system 1 may store the instructions for later execution or execute the instructions as they arrive over the network connection 20.
In some cases, the downloaded instructions may be directly supported by the CPU 10 of computer system 1. Consequently, execution of the instructions may be performed directly by the CPU 10. In other cases, the instructions may not be directly executable by the CPU 10. Under these circumstances, the instructions may be executed by causing the CPU 10 to execute an interpreter that interprets the instructions, or by causing the CPU 10 to execute instructions which convert the received instructions to instructions which can be directly executed by the CPU 10.

In an alternative embodiment, hardwired circuitry may be used in place of, or in combination with, software instructions to implement the present invention. For example, in certain embodiments of the present invention, aspects of the present invention may be included within, or carried out by, the graphics controller 16. Thus, the present invention is not limited to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by a computer system.
In general, the difference between the left and right image of a stereoscopic image pair is simply a horizontal shift. The magnitude of this shift depends, in part, upon the apparent distance of the subject from the viewer (the depth). In certain computer graphics subsystems, depth information relating to displayable objects is stored in a special memory, known as a Z buffer, the contents of which are used for purposes of visible surface determination. Consequently, approximated images according to the present invention are generated based, in part, upon depth information stored in a Z buffer.
Referring now to FIG. 2, the graphics controller 16 is shown in greater detail. The graphics controller 16 includes a memory 37, and processing and control circuitry 36 coupled between the bus 18 and the memory 37. The memory 37 includes a Z buffer 38 for storing depth (z) values associated with individual pixels of a display, as well as a frame buffer 39 for storing color values and other information of frames to be displayed. The display of display device 17 is periodically refreshed by the graphics controller 16 from the contents of the frame buffer 39. It should be noted that, although the Z buffer 38 is shown within the graphics controller 16, in alternative embodiments the Z buffer 38 may be located elsewhere within the computer system 1, such as in RAM 11.
It is useful at this point to consider certain aspects of generating 3-D images in a computer system. The process typically requires several transformations between coordinate systems, or "frames" of reference: 1) a local frame to world frame transformation; 2) a world frame to camera frame transformation; 3) a camera frame to window frame transformation; and 4) a window frame to viewport frame mapping. Techniques for performing these transformations are well-known in computer graphics. However, a discussion of certain aspects of these techniques may facilitate understanding of the present invention and is therefore provided now with reference to FIGS. 3 through 5.
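The four-stage pipeline above can be sketched in Python as follows. The 4x4 homogeneous-matrix representation and all helper names are illustrative assumptions; only the ordering of the stages (local to world, world to camera, camera to window, window to viewport) comes from the text.

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def transform_point(p_local, m_local_to_world, m_world_to_camera,
                    m_camera_to_window, window_to_viewport):
    """Carry one point through the four coordinate-frame stages."""
    v = p_local + [1.0]                      # homogeneous coordinates
    v = mat_vec(m_local_to_world, v)         # 1) local frame -> world frame
    v = mat_vec(m_world_to_camera, v)        # 2) world frame -> camera frame
    v = mat_vec(m_camera_to_window, v)       # 3) camera frame -> window frame
    x, y, z, w = v
    p_window = [x / w, y / w, z / w]         # perspective divide
    return window_to_viewport(p_window)      # 4) window frame -> viewport frame
```

With identity matrices and an identity viewport mapping, a point passes through unchanged, which makes the stage ordering easy to verify in isolation.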
During execution of a 3-D software application, an object to be displayed is initially represented in terms of its own local frame. Referring now to FIG. 3, consider a simple example in which a 3-D application represents two objects to be displayed, objects 21 and 22. Objects 21 and 22 are shown in FIG. 3 with respect to the coordinate axes x, y, and z of a world frame. The world frame refers to the overall environment maintained by the application, which may include a number of displayable objects. Objects 21 and 22 are initially referenced only to their own local frames, 23 and 24, respectively. Consequently, the initial transformation involves any rotation, translation, and scaling required to reference objects to the world frame.
Referring now to FIG. 4, an object must next be transformed from the world frame to the camera frame. The camera frame essentially represents the frame of the viewer (or camera) and is defined by the coordinate axes u, v, and n, with origin r. The position Pc of the camera is defined by the coordinates (xc, yc, zc). Thus, the object 21, which is shown in FIG. 3 as conforming to the world frame, is transformed to the camera frame according to the well-known transformation of equation (1), in which Pw represents the coordinates (xw, yw, zw) of an object in the world frame, and Mw→c represents a world-to-camera frame transformation matrix, which is well-known in the art of 3-D computer graphics.

    Pc = Mw→c · Pw                                 (1)
Next, the object must be transformed from the camera frame to the window frame. The window frame represents the coordinate system of the portion of the data that the user wants to view. This transformation is represented by the well-known transformation equations (2) through (5), in which Pw represents the window frame coordinates (Xw, Yw, Zw) of the object, Mc→w represents a camera-to-window frame transformation matrix, which is well-known in the art of 3-D computer graphics, and x, y, z, and w are intermediate coordinates.

    [x, y, z, w]T = Mc→w · [Xc, Yc, Zc, 1]T        (2)
    Xw = x / w                                     (3)
    Yw = y / w                                     (4)
    Zw = z / w                                     (5)
`
Finally, the object must be transformed from the window frame to the viewport frame. The viewport frame corresponds to the display area of the display device. FIGS. 5A and 5B illustrate the relationship between the window frame and the viewport frame with respect to window coordinate axes Wx, Wy, and Wz. The transformation essentially involves translation and scaling. The window 25 is defined to be centered at coordinates (WCX, WCY, WCZ) and to have dimensions of WSX along the Wx axis, WSY along the Wy axis, and WSZ along the Wz axis. The viewport 27 is defined to be centered at coordinates (VCX, VCY, VCZ) and to have dimensions VSX along the Vx axis, VSY along the Vy axis, and VSZ along the Vz axis. The mapping of the window 25 to the viewport 27 is defined by equations (6) through (9), in which Pv represents the coordinates (Xv, Yv, Zv) of the object in the viewport frame.

    Pv = fw→v(Pw)                                  (6)
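The translation-and-scaling mapping described above can be sketched per axis as follows. The specific per-axis form is an assumption consistent with the stated window and viewport centers and sizes, not a quotation of the patent's equations (7) through (9).

```python
def window_to_viewport(xw, yw, zw, wc, ws, vc, vs):
    """Map window-frame coordinates (xw, yw, zw) to viewport-frame
    coordinates, given window center wc = (WCX, WCY, WCZ), window size
    ws = (WSX, WSY, WSZ), viewport center vc = (VCX, VCY, VCZ), and
    viewport size vs = (VSX, VSY, VSZ): translate to the window center,
    scale by the size ratio, then translate to the viewport center."""
    xv = vc[0] + (xw - wc[0]) * vs[0] / ws[0]
    yv = vc[1] + (yw - wc[1]) * vs[1] / ws[1]
    zv = vc[2] + (zw - wc[2]) * vs[2] / ws[2]
    return xv, yv, zv
```

For example, with a window of size 2 centered at the origin and a viewport of size 200 centered at (100, 100, 0), the window corner (1, 1, 0) lands at the viewport corner (200, 200, 0).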
`
As noted above, the present invention provides for approximations of images based on depth values stored in a Z buffer. The Z buffer generally contains a depth value for each pixel of a frame to be displayed. Referring now to FIG. 6, the depth value z for a given pixel 32 is defined herein as the apparent distance from the eye point (or camera point) 31 to the front surface 33 of a displayable object 21. Also shown in FIG. 6 are the focal length φ, which is defined as the distance from the eye point 31 to the image plane 30 in the viewing direction, and the interocular distance δ, which is defined as the distance between the left and right eyes of the viewer.

The present invention provides that either the left image or the right image of a scene is approximated based on the other. However, for purposes of description only, it shall be assumed henceforth that the right image is approximated from the left image unless otherwise stated. As noted above, a corresponding left and right image differ only in terms of their X (horizontal) coordinates. Thus, if XvL represents the X coordinate value of a pixel in the left image (in the viewport frame), then in accordance with the present invention, the X coordinate value XvR of the corresponding pixel of the right image (in the viewport frame) can be obtained using equation (10), in which Zv represents the Z (depth) value associated with the pixel in the viewport frame, and K1 and K2 are given by equations (11) and (12), respectively.

    XvR = XvL + K1 + K2 · Zv                       (10)
`
Refer now to FIG. 7, which illustrates a routine for generating the right image as an approximation of the left image in accordance with the present invention. Initially, in step 701 the parameters K1 and K2 are computed according to equations (11) and (12). Next, in step 702 the entire scene is rendered (including transformation, lighting, setup, and rasterization) as viewed from the left eye point. In step 703, the current scan line is set equal to the top scan line, and in step 704 the current pixel is set equal to the first pixel in the current scan line. In step 705, XvR is computed according to equation (10). Next, in step 706 the red (R), green (G), and blue (B) values computed for pixel (XvL, Yv) are stored at the location for pixel (XvR, Yv) in a portion of the frame buffer allocated for the right image (the "right frame buffer"). If there are more pixels in the scan line (step 707), then the current pixel is set to the next pixel in step 710, and the routine repeats from step 705. If not, then if there are more scan lines (step 708), the current scan line is set to the next scan line in step 709, and the routine then repeats from step 704. If there are no more scan lines, the routine ends.
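The FIG. 7 scan-line loop can be sketched as follows, taking K1 and K2 as given by equations (11) and (12). The nested-list image representation and the function and parameter names are illustrative assumptions, not the patent's frame-buffer layout.

```python
def approximate_right_image(left_rgb, z_buf, k1, k2, width, height):
    """Approximate the right image from a rendered left image.

    left_rgb[y][x] holds the (R, G, B) values of the left image
    (step 702); z_buf[y][x] holds the viewport-frame depth Zv."""
    # Unfilled pixels are marked None so a later pass can fill them
    # (FIG. 8 assigns them values from the previous frame's image).
    right_rgb = [[None] * width for _ in range(height)]
    for y in range(height):                       # steps 703, 708-709: scan lines
        for x in range(width):                    # steps 704, 707, 710: pixels
            xr = round(x + k1 + k2 * z_buf[y][x])   # step 705: equation (10)
            if 0 <= xr < width:                   # step 706: store R, G, B
                right_rgb[y][xr] = left_rgb[y][x]
    return right_rgb
```

Pixels left as None here are exactly the "unfilled" pixels discussed below, which the FIG. 8 routine fills from the previous frame's same-eye image.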
One problem with generating one image as an approximation of the other is that the data (e.g., R, G, B) for certain pixels in the approximated image may not be computed, i.e., those pixels may not be "filled". The reason for this effect is that a point on an object may be part of a hidden surface in only one of the two images. That is, there may be a pixel in the right image which represents a given point on a surface, yet there is no corresponding pixel in the left image which represents that point. Hence, if the right image is generated as an approximation of the left image, no pixel will be designated in the right image to represent that point. As a result, at least one pixel in the right image will not be filled. Pixels that are not filled might show up as black areas on the display, which is not desirable. Accordingly, it is desirable to have a technique for compensating for this effect. The present invention includes such a technique.
A stereoscopic 3-D display is comprised of a sequence of frames (not to be confused with the "frames" of reference discussed above), in which each frame includes a left image and a right image. Accordingly, one embodiment of the present invention provides that, rather than approximating the same image for every frame (i.e., always the right image or always the left image), the left image and the right image are alternately selected to be approximated on a frame-by-frame basis. For example, the right image may be approximated based on the left image for odd-numbered frames, while the left image is approximated from the right image for even-numbered frames. Further, any pixel that is not filled in the approximated image is assigned the data (e.g., R, G, B) of the pixel with the same location in the corresponding image from the immediately preceding frame, which image was not an approximation.

Thus, using this technique, the pixels in the approximated image will contain essentially correct (although in some cases slightly time-lagged) data. The only additional computation is for those pixels that are not filled, and for those pixels, the additional computation is only a single look-up in a color buffer. No additional memory is required, because the previous frame's color buffer is maintained anyway to serve as a front buffer for display to the monitor (all processing on a frame is traditionally done on a back buffer). Thus, improved image quality is achieved at minimal cost.
`
FIG. 8 illustrates a routine for generating stereoscopic images using alternation of the approximated image. In step 801, the left and right images of the first frame (frame 1) are rendered and displayed. In step 802, if there are more frames to display, then the routine proceeds to step 803; otherwise the routine ends. In step 803, the current frame is set equal to the next frame. If the current frame is an odd-numbered frame (i.e., frame 1, 3, 5, etc.) (step 804), then the routine proceeds to steps 805A, 806A, and 807A. If, however, the current frame is an even-numbered frame (i.e., frame 2, 4, 6, etc.), then the routine proceeds to steps 805B, 806B, and 807B.

Referring to steps 805A, 806A, and 807A, the right image is first rendered in step 805A. In step 806A, the left image is generated as an approximation from the right image in the manner described above. In step 807A, for each pixel that is not filled in the left image, that pixel is assigned the data values of that pixel from the previous frame's left image. Similarly, in step 805B, the left image is rendered. In step 806B, the right image is generated as an approximation from the left image. In step 807B, for each pixel that is not filled in the right image, that pixel is assigned the data values of that pixel from the previous frame's right image.

Following either step 807A or 807B, the left and right images from the current frame are displayed in step 808, and the routine proceeds again to step 802.
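The FIG. 8 alternation loop, with its odd/even branching, can be sketched as follows. Here render_image() and approximate() stand in for the renderer and for the FIG. 7 routine; those callables, the frame-count parameter, and the nested-list image shape are all illustrative assumptions.

```python
def fill_unfilled(image, previous_same_eye_image):
    """Steps 807A/807B: any pixel left unfilled (None) takes the data
    values of the same pixel in the same-eye image of the previous frame."""
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            if pixel is None:
                image[y][x] = previous_same_eye_image[y][x]
    return image

def stereo_sequence(num_frames, render_image, approximate, display):
    """Run the FIG. 8 routine over num_frames frames."""
    left = render_image("left", 1)                          # step 801
    right = render_image("right", 1)
    display(left, right)
    for frame in range(2, num_frames + 1):                  # steps 802-803
        if frame % 2 == 1:                                  # step 804: odd frame
            right = render_image("right", frame)            # step 805A
            left = fill_unfilled(approximate(right), left)  # steps 806A-807A
        else:                                               # even frame
            left = render_image("left", frame)              # step 805B
            right = fill_unfilled(approximate(left), right) # steps 806B-807B
        display(left, right)                                # step 808
```

Note that when fill_unfilled() runs, the variable for the opposite-eye image still holds the previous frame's fully rendered image, which is what steps 807A/807B require.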
Thus, a method and apparatus have been described for generating fast, low-cost stereoscopic displays in a computer system. Although the present invention has been described with reference to specific exemplary embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention as set forth in the claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
What is claimed is:

1. A method of generating a stereoscopic sequence of frames, each frame in the sequence having a left image and a right image, wherein for at least one frame in the sequence, one of the left image and the right image is an approximation of the other, the method comprising the steps of:
  identifying any pixels not filled in the approximation image; and
  assigning, to any pixel not filled in the approximation image, the data values of a corresponding pixel in an image from a preceding frame.

2. A method according to claim 1, wherein the image from a preceding frame comprises a non-approximated image from a preceding frame.

3. A method according to claim 2, wherein the non-approximated image from a preceding frame comprises a non-approximated image from the immediately preceding frame.

4. A method according to claim 3, wherein the non-approximated image from the immediately preceding frame corresponds to the same eyepoint as that of the approximation image.

5. A method of generating a stereoscopic sequence of frames, each frame having a left image and a right image, the method comprising the steps of:
  for each frame of a first set of frames in the sequence, generating one of the left image and the right image as an approximation of the other of the left image and the right image;
  for each frame of a second set of frames in the sequence, generating said other of the left image and the right image as an approximation of said one of the left image and the right image;
`
  identifying a pixel not filled in one of the images generated as an approximation; and
  assigning a value to the pixel not filled based on the value of a corresponding pixel in the same image of a previous frame.

6. A method according to claim 5, further comprising the step of performing the generating steps so as to alternate the one of the left image and the right image that is approximated on a frame-by-frame basis.

7. A method according to claim 6, wherein each of the approximations is based on depth information stored in a Z buffer.

8. A method according to claim 7, wherein the previous frame is the immediately preceding frame.

9. A method of generating a stereoscopic display, the display including a sequence of frames, each frame having a first image of a scene corresponding to one of a left eye view and a right eye view and a second image of the scene corresponding to the other of the left eye view and the right eye view, each of the first and second images formed by a plurality of pixels, the method comprising the steps of:
  (a) rendering the first image of a first frame of the sequence of frames, including determining a value for each of the pixels of the first image;
  (b) rendering the second image of the first frame as an approximation of the first image, including approximating a value for each of the pixels of the second image based on the value of a corresponding pixel of the first image;
  (c) rendering the second image of a second frame of the sequence of frames, including determining a value for each of the pixels of the second image of the second frame;
  (d) rendering the first image of the second frame as an approximation of the second image of the second frame, including approximating a value for each of the pixels of the first image of the second frame based on the value of a corresponding pixel of the second image of the second frame;
  (e) repeating steps (a) through (d) for different frames of the sequence of frames, to render each frame of the sequence of frames;
  (f) identifying any pixels not filled in each of the images generated as an approximation; and
  (g) for each pixel not filled, assigning said pixel a value based on a corresponding pixel of the same image from the immediately preceding frame.

10. An apparatus for generating a stereoscopic sequence of frames, each frame having a left image and a right image, the apparatus comprising:
  a memory storing the frames of the sequence;
  processing circuitry coupled to the memory, the processing circuitry generating, for each frame of a first set o