`US007542035B2
`
(12) United States Patent
`Oxaal
`
`(10) Patent No.:
`(45) Date of Patent:
`
`US 7,542,035 B2
`Jun.2,2009
`
`(54) METHOD FOR INTERACTIVELY VIEWING
`FULL-SURROUND IMAGE DATA AND
`APPARATUS THEREFOR
`
`(76)
`
`Inventor: Ford Oxaal, 42 Western Ave., Cohoes,
`NY (US) 12047
`
`( *) Notice:
`
`Subject to any disclaimer, the term of this
`patent is extended or adjusted under 35
`U.S.C. 154(b) by 656 days.
`
`(56)
`
`References Cited
`
`U.S. PATENT DOCUMENTS
`
`3,725,563 A
`4,667,236 A
`4,728,839 A
`4,763,280 A
`4,821,209 A
`4,899,293 A *
`5,027,287 A
`5,185,667 A
`
`4/1973 Woycechowsky
`5/1987 Dresdner
`3/1988 Coughlan eta!.
`8/1988 Robinson eta!.
`4/1989 Hempel eta!.
`2/1990 Dawson eta!. .............. 345/423
`6/1991 Artigalas eta!.
`2/1993 Zimmermann
`
`(21) Appl. No.: 10/602,666
`
(22) Filed: Jun. 25, 2003
`
`(65)
`
`Prior Publication Data
`
`US 2004/0004621 Al
`
`Jan. 8, 2004
`
`Related U.S. Application Data
`
`(63) Continuation of application No. 09/871,903, filed on
`Jun. 4, 2001, now abandoned, which is a continuation
`of application No. 09/228,760, filed on Jan. 12, 1999,
`now Pat. No. 6,243,099, and a continuation-in-part of
`application No. 08/749,166, filed on Nov. 14, 1996,
`now Pat. No. 5,903,782.
`
`(60) Provisional application No. 60/006,800, filed on Nov.
`15, 1995, provisional application No. 60/071,148,
`filed on Jan. 12, 1998.
`
`(51)
`
`Int. Cl.
`(2006.01)
`G06T 17100
`(2006.01)
`G09G 5100
`(52) U.S. Cl. ....................................... 345/420; 345/582
`(58) Field of Classification Search . ... ... ... ... .. .. 345/419,
`345/420, 427, 582, 583, 585; 703/1, 2, 6;
`463/32, 34
`See application file for complete search history.
`
`(Continued)
`
`FOREIGN PATENT DOCUMENTS
`
EP    1 341 383 A2    9/2003
`
`(Continued)
`
`OTHER PUBLICATIONS
`
Comaniciu, D., Ramesh, V., and Meer, P., "Real-Time Tracking of Non-Rigid Objects Using Mean-Shift," IEEE Computer Vision and Pattern Recognition, vol. II, 2000, pp. 142-149.
`
`(Continued)
`
Primary Examiner-Ryan R. Yang
(74) Attorney, Agent, or Firm-Groover & Associates
`
`(57)
`
`ABSTRACT
`
A method of modeling of the visible world using full-surround image data includes steps for selecting a view point within a p-surface, selecting a direction of view within the p-surface, texture mapping full-surround image data onto the p-surface such that the resultant texture map is substantially equivalent to projecting full-surround image data onto the p-surface from the view point to thereby generate a texture mapped p-surface, and displaying a predetermined portion of the texture mapped p-surface. An apparatus for implementing the method is also described.
`
`123 Claims, 4 Drawing Sheets
`
`antipodal point
`tangent point
`
`GOOGLE EXHIBIT 1003, Page 1 of 15
`
`
`
`
`U.S. PATENT DOCUMENTS
`
`OTHER PUBLICATIONS
`
5,321,776 A    6/1994   Shapiro
5,359,363 A   10/1994   Kuban et al.
5,396,284 A    3/1995   Freeman
5,434,617 A    7/1995   Bianchi
5,495,292 A    2/1996   Zhang et al.
5,666,157 A    9/1997   Aviv
5,684,937 A   11/1997   Oxaal
5,694,533 A * 12/1997   Richards et al. ............. 345/420
5,923,334 A *  7/1999   Luken ........................ 345/423
6,028,584 A *  2/2000   Chiang et al. ............... 345/628
6,049,281 A    4/2000   Osterweil
6,147,709 A   11/2000   Martin et al.
6,215,519 B1   4/2001   Nayar et al.
6,243,099 B1   6/2001   Oxaal
6,344,852 B1   2/2002   Zhu
6,509,926 B1   1/2003   Mills et al.
6,724,421 B1   4/2004   Glatt
6,757,434 B2   6/2004   Miled et al.
6,763,068 B2   7/2004   Oktem
2003/0128756 A1  7/2003  Oktem
`
`FOREIGN PATENT DOCUMENTS
`
WO    WO 02/062056 A1    8/2002
`
Y. Yardimci, I. Yilmaz, A. E. Cetin, "Correlation Tracking Based on Wavelet Domain Information," Proceedings of SPIE vol. 5204, San Diego, Aug. 5-7, 2003.
A. M. Bagci, Y. Yardimci, A. E. Cetin, "Moving Object Detection Using Adaptive Subband Decomposition and Fractional Lower-Order Statistics in Video Sequences," Signal Processing, 82(12): 1941-1947, Dec. 2002.
C. Stauffer, W. Grimson, "Adaptive Background Mixture Models for Real-Time Tracking," Proc. IEEE CS Conf. on Computer Vision and Pattern Recognition, vol. 2, 1999, pp. 246-252.
"A System for Video Surveillance and Monitoring," in Proc. American Nuclear Society (ANS) Eighth International Topical Meeting on Robotics and Remote Systems, Pittsburgh, PA, Apr. 25-29, 1999 by Collins, Lipton and Kanade.
Aube, 12th International Conference on Automatic Fire Detection, 2001.
X. Zhou, R. Collins, T. Kanade, and P. Metes, "A Master-Slave System to Acquire Biometric Imagery of Humans at Distance," ACM International Workshop on Video Surveillance, Nov. 2003.
`* cited by examiner
`
`
`
`
`
`FIG. 1
`
`FIG. 2
`
`FIG. 3
`
`
`
`
`
FIG. 4A

FIG. 4B
`
`
`
`
`FIG. 5
`
`FIG. 6
`
`FIG. 7
`antipodal point
`tangent point
`
`
`
`
`
`0
`"'¢"
`L
`
`i':::;E
`~
`
`C)
`
`l() l -
`
`1-
`:::>
`0.. :z
`
`.__ r--
`
`co
`•
`-LL
`
`0
`
`~
`
`UJ
`_J
`al
`~
`a:::
`0
`_J
`0
`0
`
`I
`:::> a..
`r
`
`(.)
`
`0
`-r-
`
`0
`<.0
`'---
`
`~
`0
`0:::
`
`I
`I
`I
`L _________
`
`-I
`
`C) r-
`
`0
`Nv-
`
`~
`a.. en
`
`0
`
`
`
`
`METHOD FOR INTERACTIVELY VIEWING
`FULL-SURROUND IMAGE DATA AND
`APPARATUS THEREFOR
`
`CROSS-REFERENCE TO RELATED
`APPLICATIONS
`
This is a Continuation of Ser. No. 09/871,903 (filed Jun. 4, 2001 and now abandoned), which is a Continuation of U.S. Pat. No. 6,243,099 (filed Jan. 12, 1999 as Ser. No. 09/228,760), the '099 patent being a Continuation-in-Part of U.S. Pat. No. 5,903,782 (filed Nov. 14, 1996 as Ser. No. 08/749,166), which application claims priority from Provisional Patent Application Ser. No. 60/006,800 (filed Nov. 15, 1995), the '099 patent also claiming priority from Provisional Patent Application Ser. No. 60/071,148 (filed Jan. 12, 1998).
`
`REFERENCE TO COMPUTER PROGRAM
`LISTING SUBMITTED ON CD
`
This application incorporates by reference the computer program listing appendix submitted on (1) CD-ROM entitled "Viewer Computer Program Listing" in accordance with 37 C.F.R. §1.52(e). Pursuant to 37 C.F.R. §1.77(b)(4), all of the material on the CD-ROM is incorporated by reference herein, the material being identified as follows:
`
File Name        Size in Kilobytes   File Type   Date Created
FIGS. 9AG.txt    11 KB               text        Oct. 30, 2000
FIGS.-10AB.txt   2 KB                text        Oct. 30, 2000
ggConstants.h    2 KB                text        Oct. 30, 2000
glmain.c         11 KB               text        Oct. 30, 2000
sidebyside.c     15 KB               text        Oct. 30, 2000
sphere_map.c     2 KB                text        Oct. 30, 2000
warp.c           5 KB                text        Oct. 30, 2000
warp.h           1 KB                text        Oct. 30, 2000
`
A portion of the disclosure of this patent document including said computer code contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
`
`BACKGROUND OF THE INVENTION
`
The present invention relates generally to a method and corresponding apparatus for viewing images. More specifically, the present invention relates to a method and corresponding apparatus for viewing full-surround, e.g., spherical, image data.
Systems and techniques for changing the perspective of a visible image in producing a resultant image, or systems and methods of transforming an image from one perspective form to another, have been the subject of scientific thought and research for many years. Systems and techniques for transforming visible images can generally be divided into three separate categories:
(1) perspective generation systems and methods suitable for applications such as flight simulators;
(2) three-dimensional (3D) to two-dimensional (2D) conversion systems and methods; and
(3) miscellaneous systems and methods.
The first category includes U.S. Pat. No. 3,725,563, which discloses a method of and apparatus for raster scan transformations using rectangular coordinates which are suitable for electronically generating images for flight simulators and the like. More specifically, the patent discloses a technique for raster shaping, whereby an image containing information from one viewpoint is transformed to a simulated image from another viewpoint. On the other hand, U.S. Pat. No. 4,763,280 discloses a curvilinear dynamic image generation system for projecting rectangular coordinate images onto a spherical display surface. In the disclosed system, rectangular coordinates are converted to spherical coordinates and then the spherical coordinates are distorted for accomplishing the desired simulation of curvature.
The second category of systems and techniques perform 3D-to-2D conversion, or vice versa. For example, U.S. Pat. No. 4,821,209 discloses a method of and apparatus for data transformation and clipping in a graphic display system, wherein data transformation is accomplished by matrix multiplication. On the other hand, U.S. Pat. No. 4,667,236 discloses a television perspective effects system for providing perspective projection whereby each point of a three-dimensional object is projected onto a two-dimensional plane. New coordinates X' and Y' are prepared from the original coordinates X, Y and Z, and the viewing distance D, using the general formulas X'=XD/Z and Y'=YD/Z. As the object to be displayed is rotated around the X or Y axis, the viewing distance D is changed for each point.
In the third category, miscellaneous systems and methods are disclosed by, for example, U.S. Pat. No. 5,027,287, which describes a device for the digital processing of images to obtain special geometrical effects wherein digital image data corresponding to intersection points on a rectangular X,Y grid are transposed by interpolation with respect to intersection points of a curved surface. U.S. Pat. No. 4,882,679, on the other hand, discloses a system and associated method of reformatting images for three-dimensional display. The disclosed system is particularly useful for generating three-dimensional images from data generated by diagnostic equipment, such as magnetic resonance imaging.
However, none of the above described methods or systems permit viewing in circular perspective, which is the best way to view spherical data. Circular perspective does all that linear perspective does when zoomed in, but it allows the view to zoom out to the point where the viewer can see almost everything in the spherical data simultaneously in a visually palatable and coherent way.
What is needed is a method for viewing full-surround, e.g., spherical, image data employing circular perspective. Moreover, what is needed is an apparatus for viewing full-surround, e.g., spherical, image data employing circular perspective. What is also needed is a method for viewing full-surround, e.g., spherical, image data employing circular perspective which is computationally simple. Preferably, the method for viewing full-surround, e.g., spherical, image data employing circular perspective can be employed on any personal computer (PC) system possessing a three-dimensional (3-D) graphics capability.
`
`40
`
`SUMMARY OF THE INVENTION
`
`60
`
`Based on the above and foregoing, it can be appreciated
`that there presently exists a need in the art for viewing meth(cid:173)
`ods and corresponding apparatuses which overcome the
`above-described deficiencies. The present invention was
`65 motivated by a desire to overcome the drawbacks and short(cid:173)
`comings of the presently available technology, and thereby
`fulfill this need in the art.
`
`
`
`
The present invention implements a novel and practical circular perspective viewer for spherical data. Moreover, it implements the circular perspective viewer within the context of existing 3D graphics utilities native to personal computers (PCs). Thus, the method and corresponding apparatus for circular perspective viewing is practical for a broad market.
One object according to the present invention is to provide a method and corresponding apparatus for modeling the visible world by texture mapping full-surround image data.
Another object according to the present invention is to provide a method and corresponding apparatus for modeling the visible world by texture mapping full-surround image data onto a p-surface whereby the resultant texture map is substantially equivalent to projecting full-surround image data onto the p-surface from a point Q inside the region X of the p-surface.
Still another object according to the present invention is to provide a method and corresponding apparatus for modeling the visible world by texture mapping full-surround image data wherein the viewer is allowed to interactively rotate the model.
Yet another object according to the present invention is to provide a method and corresponding apparatus for modeling the visible world by texture mapping full-surround image data wherein the viewer is allowed to interactively change the direction of vision.
A still further object according to the present invention is to provide a method and corresponding apparatus for modeling the visible world by texture mapping full-surround image data, wherein the viewer is allowed to interactively alter the focal length or view angle.
Another object according to the present invention is to provide a method and corresponding apparatus for modeling the visible world by texture mapping full-surround image data, wherein the viewer is allowed to interactively change the direction of view.
Still another object according to the present invention is to provide a method and corresponding apparatus for modeling the visible world by texture mapping full-surround image data, wherein the viewpoint is close to the surface of the p-sphere.
Another object according to the present invention is to provide a method and corresponding apparatus for modeling the visible world by texture mapping full-surround image data, wherein the viewer is allowed to interactively change the direction of view.
A further object according to the present invention is to provide a method and corresponding apparatus for modeling the visible world by texture mapping full-surround image data, wherein the viewer is allowed to select an area of the image and cause another model of the visible world to be loaded into said viewing system.
Another object according to the present invention is to provide a method and corresponding apparatus for modeling the visible world by texture mapping full-surround image data, wherein the viewer is allowed to perform any combination of actions specified immediately above.
It will be appreciated that none of the above-identified objects need actually be present in the invention defined by the appended claims. In other words, only certain, and not all, objects of the invention have been specifically described above. Numerous other objects advantageously may be provided by the invention, as defined in the appended claims, without departing from the spirit and scope of the invention.
These and other objects, features and advantages according to the present invention are provided by a method of modeling the visible world using full-surround image data. Preferably, the method includes steps for selecting a view point within a p-surface, and texture mapping full-surround image data onto the p-surface such that the resultant texture map is substantially equivalent to projecting full-surround image data onto the p-surface from the view point to thereby generate a texture mapped p-surface.
According to one aspect of the invention, the method also includes a step for either rotating the texture mapped p-surface or changing the direction of view to thereby expose a new portion of the texture mapped p-surface. According to another aspect of the invention, a first texture mapped p-sphere is replaced by a second texture mapped p-sphere by interactively selecting the new viewpoint from viewpoints within the second texture mapped p-sphere.
These and other objects, features and advantages according to the present invention are provided by a method of modeling of the visible world using full-surround image data, the method comprising steps for providing the full-surround image data, selecting a view point within a p-surface, texture mapping full-surround image data onto the p-surface such that the resultant texture map is substantially equivalent to projecting full-surround image data onto the p-surface from the view point to thereby generate a texture mapped p-surface, and displaying a predetermined portion of the texture mapped p-sphere.
`These and other objects, features and advantages according
`to the present invention are provided by an apparatus for
`modeling the visible world using full-surround image data,
`comprising first circuitry for selecting a view point within a
`p-surface, second circuitry for texture mapping full-surround
`image data onto the p-surface such that the resultant texture
`map is substantially equivalent to projecting full-surround
`image data onto the p-surface from the view point to thereby
`generate a texture mapped p-surface, and third circuitry for
`displaying a predetermined portion of the texture mapped
`p-sphere.
According to one aspect, the present invention provides a method of modeling of the visible world using full-surround image data, the method including steps for selecting a view point within a p-surface, selecting a direction of view within the p-surface, texture mapping full-surround image data onto the p-surface such that the resultant texture map is substantially equivalent to projecting full-surround image data onto the p-surface from the view point to thereby generate a texture mapped p-surface, and displaying a predetermined portion of the texture mapped p-surface. If desired, the texture mapped p-surface can be rotated so as to simulate rotating the direction of view in the opposite direction. Preferably, the method further includes a step for interactively changing the direction of view to thereby expose a corresponding portion of the texture mapped p-surface. In addition, the method permits a viewer to interactively alter at least one of a focal length or an angle of view relative to the texture mapped p-surface to thereby vary the displayed portion of the texture mapped p-surface. Advantageously, the method can include steps for selecting a new viewpoint, repeating the texture mapping step using the new viewpoint, and redisplaying the predetermined portion of the p-surface, whereby a first image portion occupying the predetermined portion displayed during the displaying step is different than a second image portion occupying the predetermined portion during the redisplaying step. In that instance, the selecting step can include interactively selecting the new viewpoint. Moreover, the first texture mapped p-surface is replaced by a second texture mapped p-surface by interactively selecting the new viewpoint from
`
`
`
`
`5
`viewpoints within the second texture mapped p-surface. Ben(cid:173)
`eficially, the new viewpoint can be close to the surface of the
`p-surface.
According to another aspect, the present invention provides a method for interactively viewing a model of the visible world formed from full-surround image data, including steps for providing the full-surround image data, selecting a view point within a p-surface, establishing a first direction of view within the p-surface, texture mapping full-surround image data onto the p-surface such that the resultant texture map is substantially equivalent to projecting full-surround image data onto the p-surface from the view point to thereby generate a texture mapped p-sphere, interactively changing the direction of view to thereby select a second direction of view, and displaying a predetermined portion of the texture mapped p-sphere as the texture mapped p-sphere moves between the first and second directions of view. Advantageously, the interactively changing step results in rotating the texture mapped p-sphere so as to simulate rotating the direction of view in the opposite direction.
According to a further aspect, the present invention provides an apparatus for interactively viewing a model of the visible world formed from full-surround image data stored in memory, including a first device for selecting a view point within a p-surface, a second device for establishing a first direction of view within the p-surface, a third device for texture mapping full-surround image data onto the p-surface such that the resultant texture map is substantially equivalent to projecting full-surround image data onto the p-surface from the view point to thereby generate a texture mapped p-sphere, a fourth device for interactively changing the direction of view to thereby select a second direction of view, and a display device for displaying a predetermined portion of the texture mapped p-sphere as the texture mapped p-sphere moves between the first and second directions of view. Advantageously, the fourth device effectively rotates the texture mapped p-sphere so as to simulate rotating the direction of view in the opposite direction. Preferably, the first through fourth devices are software devices.
These and other objects, features and advantages of the invention are disclosed in or will be apparent from the following description of preferred embodiments.
`
`20
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
These and various other features and aspects of the present invention will be readily understood with reference to the following detailed description taken in conjunction with the accompanying drawings, in which like or similar numbers are employed throughout, and in which:
FIG. 1 illustrates a set of all rays from a predetermined viewpoint, which illustration facilitates an understanding of the present invention;
FIG. 2 illustrates a set of points, excluding the viewpoint, located on a corresponding one of the rays, which illustration facilitates an understanding of the present invention;
FIG. 3 illustrates the formation of a projection of the set of points, or a subset thereof, illustrated in FIG. 2;
FIGS. 4A and 4B illustrate the resultant images generated by two different projections, respectively, given a constant viewpoint;
FIG. 5 illustrates the concept of linear perspective;
FIG. 6 illustrates the concept of circular perspective;
FIG. 7 illustrates the concept of stereographic projection; and
`
FIG. 8 is a high level block diagram of a circular perspective viewing system according to the present invention.
`
`DETAILED DESCRIPTION OF THE PREFERRED
`EMBODIMENTS
`
The method and corresponding apparatus according to the present invention are similar to that disclosed in U.S. Pat. No. 5,684,937, which patent is incorporated herein by reference for all purposes, in that it generates perspective views derived for constants less than or equal to two, and greater than or equal to one, i.e., 1.0≤X≤2.0. However, it will be appreciated that the inventive method and apparatus are different from U.S. Pat. No. 5,684,937 in that the method for deriving perspective is different than 'explicitly' dividing all angles by a selected constant, as disclosed in that patent. Instead, the angles are 'implicitly' divided by a constant by moving the viewpoint around inside a "p-sphere". Additional details will be provided below.
By employing the method and corresponding apparatus according to the present invention, it is possible to create a virtual pictosphere using a conventional 3-D graphics system. Preferably, the inventive method and apparatus texture map the visible world onto a sphere. It should be mentioned that when the user selects a viewpoint at the center of this sphere and renders the view using the primitives of a conventional 3D graphics system, the user implicitly divides all angles by one, resulting in a linear perspective view. However, when the user selects a viewpoint on the surface of this sphere, selects a direction of view towards the center, and renders the view using the primitives of a conventional 3D graphics system, the user implicitly divides all angles by two, thus creating a circular perspective view. Moreover, by allowing the viewpoint to move around within or on the sphere, the user achieves results virtually identical to those achieved by U.S. Pat. No. 5,684,937 for constants ranging from 1.0 to 2.0.
It will be appreciated that the method and corresponding apparatus according to the present invention implement a novel and practical circular perspective viewer of spherical data. The inventive method and apparatus advantageously can be achieved within the context of existing 3-D graphics utilities and hardware native to PCs. It will be noted from the statement immediately above that the inventive method and apparatus advantageously can be implemented in a broad range of existing systems.
The method and corresponding apparatus according to the present invention are predicated on the following starting, i.e., given, conditions:
(1) the set of all rays V from a given point VP, as illustrated in FIG. 1;
(2) a set of points P not including VP, each point in P being contained by one and only one ray in V, as illustrated in FIG. 2; and
(3) the set of color values C, each color in C being associated with one and only one ray in V, and also thereby associated with the point in P contained by said ray.
Moreover, the following definitions apply:
(1) POINTS P: The visible world.
(2) A PROJECTION OF P: A subset of points P. Any number of points Pn contained in P may be slid closer to or further from point VP along their corresponding rays. The resultant new configuration of points P is called a projection of P. The concept can best be understood by referring to FIG. 3;
(3) MAGIC POINT, VIEWPOINT, OR POINT OF PROJECTION: Point VP. Please note, no matter how points P are projected, their appearance will remain the same
when viewed from point VP. This latter concept may best be understood by referring to FIGS. 4A and 4B.
(4) FULL-SURROUND IMAGE DATA: data which samples the points P. This data encodes, explicitly or implicitly, the association of a color value with a given direction from a given point of projection. It should be mentioned at this point that full-surround image data is useful in many fields of entertainment because, when delivered to many viewers, it enables the construction of an independent viewing system defined below.
(5) P-SPHERE: a computer graphics representation of any polyhedron where there exists at least one point x inside (neither intersecting, nor lying outside) the polyhedron which may be connected to every point of the polyhedron with a distinct line segment, no portion of which said line segment lies outside the polyhedron or intersects the polyhedron at a point not an endpoint. The union of all such points x forms the region X of the p-sphere. For a convex p-sphere, the region X is all points of the interior of the p-sphere. Examples of computer graphics objects which may be modeled as p-spheres include a tetrahedron, a cube, a dodecahedron, and a faceted sphere.
(6) P-SURFACE: a computer graphics representation of any surface with a well-defined inside and outside, where there exists at least one point x inside (neither intersecting, nor lying outside) the surface which may be connected to every point of the surface with a distinct line segment, no portion of which said line segment lies outside the surface or intersects the surface at a point not an endpoint. The union of all such points x forms the region X of the p-surface. For a convex p-surface, the region X is all points of the interior of the p-surface. Examples of computer graphics objects which may be modeled as p-surfaces: tetrahedron, cube, sphere, ellipsoid, cylinder, apple torus, lemon torus, b-spline surfaces closed or periodic in u and v. A p-sphere is a p-surface.
(7) LINEAR PERSPECTIVE: the projection of a portion of the visible world onto a plane or portion of a plane, as illustrated in FIG. 5.
(8) CIRCULAR PERSPECTIVE: the projection of the visible world, or portion thereof, onto a plane, or a portion of a plane, after performing the perspective transformation of the visible world according to U.S. Pat. No. 5,684,937, where the constant employed is 2. After such a transformation, when the direction of vision specified in said transformation is perpendicular to the projection plane, there is a one-to-one mapping of all the points P defining the visible world and all the points of an infinite plane. This definition is illustrated in FIG. 6;
(9) STEREOGRAPHIC PROJECTION: the one-to-one mapping of each point on a sphere to each point of an infinite tangent plane, said mapping performed by constructing the set of all rays from the antipodal point of the tangent point, the intersection of said rays with the plane defining said mapping. The understanding of this definition will be facilitated by reference to FIG. 7. Please note that circular perspective and stereographic projection produce geometrically similar (identically proportioned) mappings when the direction of vision specified in the perspective transformation of circular perspective contains the tangent point specified in stereographic projection;
(10) INDEPENDENT VIEWING SYSTEM: an interactive viewing system in which multiple viewers can freely, independently of one another, and independently of the source of the image data, pan that image data in all directions with the effect that each viewer feels like they are "inside" of that imagery, or present at the location from which the imagery was produced, recorded, or transmitted; and
(11) STANDARD COMPUTER GRAPHICS SYSTEM: a computer graphics system which supports linear perspective viewing, including the changing of the focal length or the altering of the view angle, the apparent rotation of viewed objects, and/or the apparent changing of direction of vision, and the texture mapping of image data onto objects within the class of p-surface.
It will be appreciated that in a standard computer graphics system, by texture mapping full-surround image data onto a p-surface such that the resultant texture map is effectively equivalent to projecting the full-surround imagery onto the p-surface from some point Q contained in the region X of the p-surface, a representation of the visible world is achieved.
Referring to the file entitled FIGS. 9AG.txt, the method for viewing full-surround, e.g., spherical, image data will now be described. It should be mentioned that the corresponding code implementing the inventive method is written in the "C" language, although a plurality of programming languages are well known and readily adapted to this purpose. It will also be appreciated that code lines starting with "gl" or "glut" indicate calls to a conventional graphics library (GL) such as OpenGL™. One of ordinary skill in the art will readily appreciate that the last function in this listing is