`_______________________________________________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`_______________________________________________________
`
`GOOGLE INC.
`
`Petitioner
`
`v.
`
`GRANDEYE LTD.
`
`Patent Owner
`
`____________________
`
`CASES:
`
`IPR2013-00546 (Patent 8,077,176)
`
`IPR2013-00547 (Patent 6,243,099)
`
`IPR2013-00548 (Patent 7,542,035)
`
`____________________
`
`EXPERT DECLARATION OF JAMES H. OLIVER, Ph.D.
`
`Mail Stop PATENT BOARD
`Patent Trial and Appeal Board
`U.S. Patent and Trademark Office
`P.O. Box 1450
`Alexandria, VA 22313-1450
`
`GRANDEYE EXHIBIT 2028, Page 1 of 115
`
`Google Inc. v. Grandeye Ltd.
`IPR2013-00548
`
`
`
`
`Table of Contents
`
`INTRODUCTION............................................................................................................. 3
`
`PROFESSIONAL BACKGROUND AND QUALIFICATIONS ................................... 5
`
`DEFINITIONS.................................................................................................................. 7
`“Texture Mapping”..................................................................................................................... 8
`“Environment Mapping”........................................................................................................10
`
`OXAAL PATENTS (‘176), (‘035) AND (‘099).......................................................11
`
`TSAO, ET AL. (PHOTOVR) ........................................................................................17
`Distinctions between Tsao et al. and Oxaal (‘176), (‘099) & (‘035) ..................20
`
`THE FIELD OF THE OXAAL PATENTS IS “IMAGE PROCESSING”...................27
`Image Processing Is Distinct From Computer Graphics .........................................27
`Source Code Example..............................................................................................................29
`
`CONCLUSION ................................................................................................................30
`
`Appendix A: Curriculum Vitae
`Appendix B: Oxaal – Full surround image data example
`Appendix C: PhotoVR examples
`Appendix D: Source Code Appendix from Patent 7,542,035
`
`
`Introduction
`
`1. I, James H. Oliver, declare as follows:
`
`2. I have been retained by Oncam Grandeye Inc. to provide expert assessment of the
`patent and prior art referred to in this case and their relevance to the inter partes review
`of the following US Patents:
`
` US Patent 8,077,176 B2 – Method for Interactively Viewing Full-Surround Image Data
`and Apparatus Therefor, by Oxaal, 2011, henceforth referred to as Patent ‘176, Case
`IPR2013-00546
`
` US Patent 6,243,099 B1 – Method for Interactive Viewing Full-Surround Image Data and
`Apparatus Therefor, by Oxaal, 2001, henceforth referred to as Patent ‘099, Case
`IPR2013-00547
`
` US Patent 7,542,035 B2 – Method for Interactively Viewing Full-Surround Image Data
`and Apparatus Therefor, by Oxaal, 2009, henceforth referred to as Patent ‘035, Case
`IPR2013-00548
`
`3. I have reviewed the patent and the prior art related to this case. My assessment is an
`objective evaluation of the facts presented in this case as they relate to common
`practice based on my extensive experience in computer graphics, visualization, and
`virtual reality. Comments below reflect not only my current understanding of
`relevant technology, but also the general understanding among people working in
`image processing technology on or around January 7, 1998.
`
`4. In addition to the patents at issue I have reviewed the following additional
`publications and prior art:
`
`
` Expert Declaration of James H. Oliver, Ph.D., filed with the USPTO, April 15, 2013,
`in Ex Parte Reexamination Control No. 90/012,689.
`
` Declaration of John R. Grindon, D.Sc., in Support of Petition for Inter Partes Review
`of US Patent 7,542,035, August 30, 2013
`
` Decision: Institution of Inter Partes Review, Cases IPR2013-00546, IPR2013-00547, and
`IPR2013-00548, February 5, 2014
`
` US Patent 5,684,937 – Method and Apparatus for Performing Perspective Transformation
`on Visible Stimuli, by Oxaal, 1997, henceforth referred to as Patent ‘937
`
` US Patent 5,903,782 – Method and Apparatus for Producing a Three-Hundred and Sixty
`Degree Visual Data Set, by Oxaal, 1999, henceforth referred to as Patent ‘782
`
` “Texture Mapping as a Fundamental Drawing Primitive,” Paul Haeberli and
`Mark Segal, Proceedings of the Fourth Eurographics Workshop on Rendering, pp. 259-266,
`June 1993
`
 “QuickTimeVR – An Image-Based Approach to Virtual Environment Navigation,” by S.E. Chen, SIGGRAPH '95 Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pp. 29-38, 1995
`
 “Photo VR: A System of Rendering High Quality Images for Virtual Environments Using Sphere-like Polyhedral Environment Maps,” by Tsao, et al., RAMS’96, Proceedings of 2nd Workshop on Real-Time and Media Systems, pp. 397-403, 1996
`
`
 Coxeter, H. S. M., Introduction to Geometry, 2nd ed., Wiley, New York, pp. 289-290, 1969
`
`Professional Background and Qualifications
`
`5. I have been a student and practitioner of engineering for more than 35 years, having
`received my B.S. (Union College, 1979), M.S. (Michigan State University, 1981), and
`Ph.D. (Michigan State University, 1986) degrees, all in mechanical engineering. My
particular expertise is in the general area of human computer interaction technologies, encompassing computer graphics, geometric modeling, virtual and augmented reality, and collaborative networks for applications in product development and complex system operation. I hold courtesy faculty appointments in
`the Departments of Aerospace Engineering, Electrical and Computer Engineering,
`and Industrial and Manufacturing Systems Engineering. In addition, I have held a
`variety of industry positions as a practicing engineer, and began my academic career
`in 1988.
`
`6. I am currently employed by, and hold the title of University Professor at, Iowa State
`University of Science and Technology (ISU) as the Larry and Pam Pithan Professor
of Mechanical Engineering, where I teach mechanical design at the introductory (sophomore) level and at the senior undergraduate level (both required of all ME majors), as well as two graduate-level design courses in computer graphics and computer-aided design.
`
`7. Since my arrival at ISU in 1991 I have continuously enhanced and developed our
graduate course ME557, Computer Graphics and Geometric Modeling, to keep up with the
`rapid advances in the field and to support the growing research emphasis on
advanced visualization and virtual reality technology at ISU. The course has grown in
`popularity over the years and is now cross-listed with the Department of Electrical
`and Computer Engineering and the Department of Computer Science. It also
`attracts substantial on-line enrollment from students across the country. The course
`covers the theory and practice of contemporary computer graphics technology
`including object modeling, homogeneous coordinates, coordinate transformations,
projections, lighting models, rendering, texture mapping, as well as a variety of
`advanced techniques including stencil buffers, shadows, particle systems, etc.
`
`8. As a recognized expert in the field, I was asked in 1993 to review what has since
`become the seminal book in the field of surface modeling (The NURBS Book, by L.
`Piegl and W. Tiller, 1995). This technology is at the heart of all contemporary
`computer modeling software tools and has matured significantly only within the past
`20 years. I leveraged my research experience in this field, and with permission of the
`authors, developed a graduate course in the mathematical foundations of surface
`modeling to align with the manuscript, and ultimately adopted the book for my
`course. The course is now offered as an advanced (600-level) graduate course, and I
`teach it every other year.
`
`9. From 1997-2001 I took a leave from my university position to accept a position in
`the software industry. I joined Engineering Animation Incorporated (Nasdaq: EAII)
to lead their core technology team focused on CAD-independent, large model
`visualization to facilitate virtual product prototyping. In 1999, I conceptualized,
`planned and led development of e-Vis, the first commercial software product to
`combine high-performance product visualization with secure Internet-based
`collaboration capabilities to empower distributed product development and supply
`chain integration. After several corporate acquisitions, these technologies are now
`referred to as TeamCenter Visualization, part of the Siemens PLM Software tool suite,
`and are used by manufacturers around the world.
`
`
`10. In fall 2003 I was named director of the Virtual Reality Applications Center (VRAC)
`at ISU and have fostered its continued growth. Under my leadership VRAC’s
`ongoing contract research has increased from $9M to $20M and faculty involvement
`has broadened to encompass colleagues from all of ISU colleges. From 2005-2007 I
`led fund raising, technical specification, bid process and vendor management for a
`$5M upgrade of our flagship device, the C6 – now the world’s highest resolution
`immersive VR facility, and in 2008 led an $800K upgrade of Lee Liu Auditorium in
`Howe Hall, making it the world’s highest resolution stereoscopic immersive theater.
`
`11. I have garnered financial support for my research program from several federal
`sources including the National Science Foundation (NSF), NASA, and the US
`Department of Defense research branches of the Navy, Air Force and Army.
`Industry sponsors of my research include John Deere, Rockwell Collins, and Boeing.
`I have received numerous professional honors and awards including the Gustus L.
Larson Memorial Award from the American Society of Mechanical Engineers (ASME), recognizing early career achievement, and the National Science
`Foundation’s prestigious Young Investigator Award. I served six years as Associate
`Editor of the ASME Transactions, Journal of Computing and Information Science in
`Engineering. I am a Fellow of the ASME and hold three US patents on innovations in
`mechanical design and manufacturing.
`
12. More details of my qualifications are presented in my comprehensive curriculum vitae,
`which is submitted for reference as Appendix A.
`
`Definitions
`
`13. For the purposes of this report, some common terminology is defined as it would be
understood by those of ordinary skill in the art of image processing, especially as of
`the priority date of the ‘176 patent, the ‘099 patent, and the ‘035 patent, in light of
`the specifications thereof.
`
`“Texture Mapping”
`
`14. At the priority date of the subject patents, texture mapping was well understood. For
`example, Hearn and Baker1 provide a typical computer graphics (CG) textbook
`definition: “a common method for adding surface detail is to map texture patterns
`onto the surfaces of objects. The texture pattern may either be defined in a
`rectangular array or as a procedure that modifies surface intensity values. This
`approach is referred to as texture mapping or pattern mapping.”
`
`15. Although originally developed to enhance visual realism (i.e., the computer graphics
`equivalent of applying a decal to a physical object), over the past 40 years the basic
`techniques underlying texture mapping have been generalized to encompass many
`additional CG effects (e.g., environment mapping, volume visualization, and many
`others). The ubiquity of texture mapping has also led to its standardization in
`software utilities (e.g., OpenGL and DirectX) as well as hardware, such as NVIDIA
`graphics cards.
`
`16. In its most common manifestation “texture mapping” refers to the entire process of
rendering a textured CG scene. The process requires that 2D images first be
`associated with 3D geometric models via the assignment of “texture coordinates,”
`i.e., each vertex (x, y, z) in the 3D model is associated with a 2D location (u, v) in the
`image texture. During the standard CG rendering process a lighting model typically
`determines each pixel’s color. With the addition of texture mapping, each pixel’s
`color is augmented (or replaced completely) with additional elements derived from
`
`1 Donald Hearn and M. Pauline Baker, Computer Graphics: C Version, 2nd Edition,
`Prentice Hall, Upper Saddle River, NJ, 1997, pp. 554-556
`
`
`the texture itself. Both color contributions are subject to a projection from the 3D
`world space onto a 2D view plane, which generally incorporates a perspective
`transformation.
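
For illustration only, the following short Python sketch captures the per-pixel combination just described: an interpolated (u, v) coordinate selects a texel, which then modulates the color produced by the lighting model (in the spirit of OpenGL's GL_MODULATE mode). The function names, the nearest-neighbor lookup, and the tiny 2 x 2 texture are my own assumptions for the example, not material from the patents or the cited references.

    def sample_texture(texture, u, v):
        # Nearest-neighbor lookup of an RGB texel; u and v are assumed to lie in [0, 1].
        h = len(texture)          # texture rows
        w = len(texture[0])       # texture columns
        col = min(int(u * w), w - 1)
        row = min(int(v * h), h - 1)
        return texture[row][col]

    def shade_pixel(lighting_rgb, texture, u, v):
        # Modulate the lighting-model color with the sampled texel color.
        texel = sample_texture(texture, u, v)
        return tuple(l * t for l, t in zip(lighting_rgb, texel))

    # Example: a 2 x 2 texture and a pixel whose interpolated coordinates are (0.75, 0.25).
    tex = [[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
           [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)]]
    print(shade_pixel((0.8, 0.8, 0.8), tex, 0.75, 0.25))   # -> (0.0, 0.8, 0.0)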
`
`17. The Institution Decision filed for the IPR cases on the subject patents relies on very
`old prior art (Haeberli) for its construction of “texture mapping” as “applying image
`data to a surface” (p. 11). However, this construction is inadequate to encompass the
`generality of texture mapping applications. For example, for volume rendering,
`textures are rendered directly – surface geometry is not explicitly represented. A
`broader construction, understood by one of ordinary skill in the art, is: “associating,
`by reference, locations in a computer graphics object with locations in image data.”
`
`18. Furthermore, with one exception, the long list of texture mapping-related processes
`considered in the Institution Decision does not modify the meaning of texture
`mapping as construed here. The one exception is Phong shading, which is now the
de facto standard rendering method of computer graphics. Phong shading
`interpolates vertex normals across polygon edges before scan conversion, which
`applies a lighting model to each pixel to obtain a smooth appearance and specular
highlights. To be precise, Haeberli (the basis for the Institution Decision
`construction) describes a method for simulating Phong shading via texture mapping.
`One of ordinary skill in the art would recognize that Phong shading is not texture
`mapping.
`
`19. To facilitate the analysis that follows, the process of texture mapping described in
`paragraph 16 is explicitly decomposed into the following three steps:
`
` Texture Creation – A source image (or multiple images) must first be acquired or
`generated. Photographs are a common source for textures in many applications.
`
`
`Note that the lens used to create a photographic image applies a projection of the
`light representing the physical world in 3D and maps it onto a 2D surface on the
`film or sensor of the camera in order to record a 2D representation of the world.
`
` Texture Application – Very often, assigning 3D model vertices to corresponding 2D
`texture coordinates requires a specific type of projection to yield the desired
`results. Many different strategies exist to produce a variety of effects.
`
` Rendering – After the texture is applied to a 3D model it is typically rendered
`(displayed) on a 2D view window. As described in paragraph 16, for each pixel in
`the view window, part of its color is computed via application of a lighting model
that simulates the physical interaction of light striking an object. If texture
`mapping is enabled, another contribution to each pixel’s color is determined from
`the texture map by interpolating texture values assigned at each vertex. These so-
`called “fragment” or primitive fill operations are also subject to a projection
specified by the developer according to application requirements.
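
As a concrete but deliberately simplified illustration of the "Texture Application" step above, the Python sketch below assigns a 2D texture coordinate to each 3D vertex using a planar projection onto the x-y plane. The projection choice, coordinate ranges, and names are my own assumptions; real applications select among many projections (planar, cylindrical, spherical, fisheye, etc.) depending on the desired effect.

    def planar_uv(vertex, x_range=(-1.0, 1.0), y_range=(-1.0, 1.0)):
        # Project the vertex orthographically onto the x-y plane and normalize to [0, 1].
        x, y, z = vertex
        u = (x - x_range[0]) / (x_range[1] - x_range[0])
        v = (y - y_range[0]) / (y_range[1] - y_range[0])
        return (u, v)

    # Three vertices of a 3D model and the texture coordinates assigned to them:
    vertices = [(-1.0, -1.0, 0.3), (0.0, 0.5, 0.1), (1.0, 1.0, -0.2)]
    print([planar_uv(p) for p in vertices])   # [(0.0, 0.0), (0.5, 0.75), (1.0, 1.0)]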
`
`“Environment Mapping”
`
`20. Environment mapping is a common generalization of texture mapping used to
`provide background or far field-of-view elements of the scene. An environment map
`is a 2D image that represents the distant environment. The 2D image is indirectly
`related to the 3D world (or model environment) via a variety of 3D->2D mappings.
`These mappings define the relationship between a point on a sphere, cylinder or
`cube and its equivalent location on a “flattened” version of it. Environment maps are
`sometimes illustrated in their unfolded state to show their two-dimensional nature.
They can be used to add realism to rendered objects in a scene (e.g., to depict the reflection of the sky on a shiny object), or rendered directly to provide an environment viewer such as QuickTimeVR (Chen) and PhotoVR (Tsao, et al.).
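
The following minimal Python sketch is my own illustration of one such 3D-to-2D mapping: a unit view direction is converted to (u, v) coordinates in a "flattened" latitude/longitude (equirectangular) spherical environment map. The particular convention (camera looking down the -z axis, v = 0 at the zenith) is an assumption chosen for the example.

    import math

    def direction_to_latlong_uv(d):
        # Map a unit direction vector to (u, v) in an equirectangular environment map.
        x, y, z = d
        u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)            # longitude -> u
        v = 0.5 - math.asin(max(-1.0, min(1.0, y))) / math.pi    # latitude  -> v
        return (u, v)

    print(direction_to_latlong_uv((0.0, 0.0, -1.0)))   # straight ahead -> (0.5, 0.5)
    print(direction_to_latlong_uv((0.0, 1.0, 0.0)))    # straight up    -> (0.5, 0.0)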
`
`
`Oxaal Patents (‘176), (‘035) and (‘099)
`
21. I have reviewed the ‘176, ‘035 and ‘099 patents. These patents have generally identical text and drawings, except that the ‘099 patent includes some additional drawings with computer code which is included by reference in ‘035.
`
`22. These patents are based in the field of image processing. The field of image
processing, as of January 12, 1998, was still significantly separated from the field of
`computer graphics. The focus of image processing has traditionally been on
`extracting information from 2D images, while computer graphics was characterized
`by synthesizing 2D images from 3D models. These distinct goals generally led to
`different emphases. Image processing was concerned primarily with accuracy via
`precise analytic transformation computations typically on a pixel-by-pixel basis. In
`contrast, computer graphics was generally characterized by approximations of
`physical phenomena (e.g., lighting) and routinely compromised accuracy in return for
`computational efficiency.
`
`23. The boundaries of these fields have changed significantly over time: in the 1970s and
`1980s there was a very clear separation between these fields of technology. Since
`1998, as the processing power and data bandwidth of integrated circuits increased,
`the two fields have become less distinct. It is therefore important to note that the
`field of technology addressed by the Oxaal patents would have been regarded as
`image processing, not computer graphics.
`
`24. The focus of image processing has traditionally been on extracting information from
`2D images, while computer graphics was characterized by synthesizing 2D images
`from 3D models. These distinct goals generally led to different emphases. Image
processing was concerned primarily with accuracy via precise analytic transformation
`computations typically on a pixel-by-pixel basis. In contrast, computer graphics was
`generally characterized by approximations of physical phenomena (e.g., lighting) and
`routinely compromised accuracy in return for computational efficiency.
`
`25. By the early 1970’s computer graphics was focused on producing realistic images of
`3D models while utilizing the limited computational power of the day. Before
`1971 rendering involved calculating the angle between a polygonal face normal
`vector and the vector from a hypothetical light source. Color was assigned to each
`polygonal facet of the model according to this angle to simulate lighting. For curved
`surfaces represented by polygonal meshes, this resulted in a “faceted” appearance.
`
`26. In 1971 Henri Gouraud introduced a new rendering algorithm enabled by assigning
`independent surface normal vectors at each vertex of the mesh. The light-
`source/normal vector computation was done to compute a unique color at each
`vertex. After transforming all polygon vertices into view-window (2D pixel)
`coordinates, Gouraud applied an efficient scan-line processing algorithm to
`interpolate color values, first along each edge of each triangle, and then pixel-by-pixel
`across each horizontal scan-line. The result was relatively smooth shading of a curved
`surface mesh.
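
For illustration, the short Python sketch below performs the per-scan-line color interpolation at the heart of Gouraud's algorithm for a single horizontal span whose end colors have already been interpolated down the triangle edges. It is a sketch of the idea only (no clipping, depth, or sub-pixel handling), and the names are mine.

    def lerp(a, b, t):
        # Linear interpolation between two RGB colors.
        return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

    def gouraud_scanline(x_left, color_left, x_right, color_right):
        # Interpolate a color for every pixel from x_left to x_right (inclusive).
        span = max(x_right - x_left, 1)
        return [lerp(color_left, color_right, (x - x_left) / span)
                for x in range(x_left, x_right + 1)]

    # One scan line whose left end is red and right end is blue:
    for color in gouraud_scanline(10, (1.0, 0.0, 0.0), 14, (0.0, 0.0, 1.0)):
        print(color)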
`
`27. As a Ph.D. student in the early 1970’s, Ed Catmull (founder of Pixar) was inspired to
`further increase visual realism by observing that in the real world objects are often
enhanced by adding decorative surface detail, for example, wood veneer on a tabletop or wallpaper on a wall. His doctoral dissertation in 1974 introduced
`“texture mapping” to accomplish a similar effect in a simulated computer graphics
`scene. Texture mapping involved the association of a 2D image to 3D geometry so
`that during rendering, for each pixel that maps onto a 3D surface, its associated
texture image is sampled to determine color. Color derived from texture mapping
`could also be blended with corresponding contributions from Gouraud shading to
`produce increased realism in computer-generated scenes.
`
`28. Although texture mapping incorporates digital 2D images, the image processing
`research of this era was completely distinct, focused primarily on image analysis, such
`as segmentation and object correlation, as well as image transformation.
`
`29. The Oxaal patents describe methods for interactive viewing of full surround image
`data. They indicate many options and variations, as pioneer patents often do. In
`general, a main concept is the use of texture mapping to create a textured p-surface,
as an intermediate 3D geometric model object, in the process of viewing full
`surround image data. The use of texture mapping for this purpose was very
`surprising at the time. Texture mapping was itself a known technology, and hardware
`accelerators were commercially available: the surprise is that you would want to use
`that technology in the field of image processing, in the way indicated by Oxaal.
`
`30. Oxaal discovered that an image texture which has been mapped onto an arbitrarily
`shaped surface, using a defined projection point, can be subsequently rendered with
`precise perspective, when viewed from that projection point, using standard CG
`hardware and software techniques. Standard CG libraries, such as OpenGL and
`DirectX provide software bindings to hardware accelerated rendering algorithms.
`These libraries typically support linear perspective transformations.
`
`31. Oxaal’s methods address the use of full-surround image textures, generated, for
`example, from a “fisheye” camera lens. Since the projection implemented to apply
`the texture to the 3D geometry mimics the path of the light that created it, the
`rendering of it onto the 2D view plane can be accomplished directly with hardware
`accelerated linear perspective.
`
`
`32. Oxaal’s patents were motivated by the need to present a portion of full surround
`image data (e.g., a hemispherical fisheye image) in a familiar 2D view window with
`natural linear perspective. From an image processing point of view, the challenge
`would be viewed as image transformation from a portion of one (distorted) 2D
image into another (undistorted) 2D image. The traditional image processing
`approach to this challenge would be to apply the inverse distortion mapping
`transformation on a pixel-by-pixel basis.
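
To make this contrast concrete, the Python sketch below illustrates the per-pixel character of the traditional approach: for every pixel of the desired view window, and again every time the view changes, an inverse mapping into the (distorted) fisheye source image is evaluated analytically. The equidistant fisheye model, the pinhole view-window geometry, and all names are my own assumptions for illustration, not material from the Oxaal patents.

    import math

    def view_pixel_to_fisheye_uv(px, py, view_w, view_h, view_fov=math.radians(60)):
        # Map an output-window pixel to (u, v) in a 180-degree equidistant fisheye image.
        focal = 0.5 * view_w / math.tan(view_fov / 2.0)
        dx, dy, dz = px - view_w / 2.0, py - view_h / 2.0, -focal   # ray through the pixel
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        dx, dy, dz = dx / norm, dy / norm, dz / norm
        theta = math.acos(-dz)                    # angle from the optical axis
        phi = math.atan2(dy, dx)
        r = theta / (math.pi / 2.0)               # equidistant fisheye: radius ~ angle
        return (0.5 + 0.5 * r * math.cos(phi), 0.5 + 0.5 * r * math.sin(phi))

    # One full transformation pass touches every output pixel and is repeated per view change:
    lookups = [view_pixel_to_fisheye_uv(x, y, 640, 800) for y in range(800) for x in range(640)]
    print(len(lookups))   # 512000 per-pixel evaluations for a single 640 x 800 view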
`
`33. When read in light of Oxaal’s specifications, one of ordinary skill in the art would
`understand the term “full surround image data” as corresponding to image data that
`encompasses a large field of view in both horizontal and vertical directions. By way
`of an example, a picture is included in Appendix B to illustrate “full-surround image
`data” as would be understood by one of ordinary skill in the art.
`
`34. In contrast, Oxaal introduced the counterintuitive pre-processing step of texturing a
spherical p-surface with the spherical image using the mapping transformation
`equivalent to the path taken by the light when the image was created. This
`computation is done once, not each time the view is changed, and it is done for far
fewer points (typically 2,500 for Oxaal’s p-surface; see footnote 2) rather than for each pixel in the
`view window. For example, in the image processing approach a 640x800 resolution
`view window would require transformation of 512,000 pixels. Further, Oxaal teaches
`that given a p-surface textured in this way, and a viewpoint defined at the center of
projection, the sub-image defined by any view definition (i.e., direction and field-of-
`view (FOV)) can be rendered to yield a precise linear perspective view, without
`additional transformation, using standard computer graphics hardware.
`
`2 See Oxaal’s embodiment as source code, included as Appendix D. In particular, note
`that the function createHemisphere is called twice to instantiate two hemispherical
`polyhedral (p-surfaces) with 50 x 50 vertices each.
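
The following is a minimal Python sketch, assuming the same illustrative equidistant 180-degree fisheye model as above, of the one-time pre-processing described in paragraph 34: texture coordinates are computed once for the vertices of a hemispherical p-surface (here 50 x 50 vertices, matching the resolution noted in footnote 2), rather than for every pixel of every rendered view. The function below is an illustrative stand-in, not the createHemisphere routine of Appendix D.

    import math

    def create_hemisphere(rows=50, cols=50):
        # Return (vertex, (u, v)) pairs for a unit hemisphere centered on the -z axis,
        # with texture coordinates assigned by an equidistant fisheye mapping.
        mesh = []
        for i in range(rows):
            for j in range(cols):
                theta = (math.pi / 2.0) * i / (rows - 1)     # angle from the optical axis
                phi = 2.0 * math.pi * j / (cols - 1)
                vertex = (math.sin(theta) * math.cos(phi),
                          math.sin(theta) * math.sin(phi),
                          -math.cos(theta))
                r = theta / (math.pi / 2.0)                  # fisheye: radius ~ angle
                uv = (0.5 + 0.5 * r * math.cos(phi), 0.5 + 0.5 * r * math.sin(phi))
                mesh.append((vertex, uv))
        return mesh

    print(len(create_hemisphere()))   # 2500 vertices textured once, versus 512,000
                                      # per-pixel transformations for each 640 x 800 view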
`
`
`35. Another benefit of Oxaal’s discovery is that changing the viewpoint (in a very
`specific way) can produce other useful perspective views of the full surround image
`data, with no additional processing. The left-hand image in Figure 1 below shows a
`2D schematic of a view frustum defined with viewpoint corresponding to the center
`of projection, which yields a linear perspective 2D view of the full surround image
`data, as described above. Oxaal observed that the viewpoint could be moved, but
`only in the opposite direction of the view vector, to yield other valid renderings of
`the full surround image data. For example, the right-hand image of Figure 1 shows
`the same view frustum definition with the viewpoint moved to the surface of the
`sphere opposite of the view direction. This viewpoint yields a circular perspective
2D view. Any viewpoint between the two shown in Figure 1 yields a valid rendering of
`the full surround image data with elliptical perspective. With the p-surface so defined,
`each of these results is obtained without additional processing using standard
`computer graphics rendering with linear perspective.
`
`Figure 1: Oxaal – precise perspective depending on viewpoint
`
`36. Note that this precise control of variable perspective is exactly the same effect as the
art taught by Oxaal’s earlier ‘937 patent. The primary distinction between ‘937 and
`the subject patents is the discovery that rather than “explicitly” evaluating the proper
`perspective by applying an analytical relationship on a pixel-by-pixel basis each time
the viewpoint is moved, as taught in ‘937, in the subject patents the necessary
`perspective transformations are applied “implicitly” by simply moving the viewpoint
`as described above, while, by virtue of the texture mapped p-surface, the scene is
`always rendered with linear perspective using standard computer graphics technology
`(For example, see ‘035, Col. 6, lines 7-19).
`
`37. Oxaal’s approach is counterintuitive to one of ordinary skill in the art of image
`processing as of January 12, 1998. In most environment mapping applications, the
`environment map exists only as a 2D image texture, and is not directly associated
`with 3D model geometry. Instead a single texture-mapped quadrilateral primitive is
`used to represent the view window. The quadrilateral is not explicitly modeled, and is
`not a spatial model of the world boundary.
`
`38. In contrast, Oxaal’s “p-surface” is comprised of textured 3D model geometry
`(generally a triangular mesh) with texture coordinates assigned as described above.
`The p-surface represents a 3D model of the world boundary and its texture is applied
`with a projection that corresponds to the intrinsic properties of the lens that created
`the (full-surround) source image. Thus, if viewed from the center of projection, the
`textured p-surface appears as the eye would naturally “see” the world. Using standard
`OpenGL rendering, a view frustum is defined so that a portion of the textured p-
`surface is visible, and the scene is rendered using the standard texture-rendering
`utilities with standard linear perspective.
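
The Python sketch below is my own illustration (not code from the patents, and not the OpenGL API itself) of the standard linear-perspective projection on which such rendering relies: a gluPerspective-style transformation maps an eye-space vertex of the textured p-surface onto the 2D view window, after which its interpolated texture coordinates are sampled exactly as in any other textured scene.

    import math

    def perspective_project(vertex_eye, fov_y_deg, aspect, width, height):
        # Standard linear perspective: eye space (camera looking down -z) to window pixels.
        x, y, z = vertex_eye                 # z must be negative (in front of the viewpoint)
        f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
        ndc_x = (f / aspect) * x / -z        # normalized device coordinates
        ndc_y = f * y / -z
        return ((ndc_x * 0.5 + 0.5) * width, (ndc_y * 0.5 + 0.5) * height)

    # A p-surface vertex one unit straight ahead of the viewpoint lands at the window center:
    print(perspective_project((0.0, 0.0, -1.0), 60.0, 640.0 / 480.0, 640, 480))  # (320.0, 240.0)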
`
`39. One of ordinary skill in the art would have expected that Oxaal’s approach would, in
`general, require rendering many more textured primitives. It is surprising that Oxaal’s
approach requires creation of a 3D object (the textured p-surface) for handling 2D
`data (the environment map or image). However, one benefit of Oxaal’s approach is
`increased flexibility in viewing options. Another is increased scalability.
`
`Tsao, et al. (PhotoVR)
`
`40. Tsao presents an environment map viewer based on an aggregation of many
`standard (linear perspective) photographs. Tsao is motivated by a perceived
limitation of QuickTime VR, which provides a cylindrical environment map that
`necessarily limits view panning in the vertical direction. His approach is a spherical
`environment map viewer that provides unlimited view panning in both horizontal
`and vertical directions.
`
`41. Source images for Tsao’s viewer are assumed to be a series of photographs, taken
`with a normal lens (i.e., linear perspective) from a single position in space, with view
`direction aimed at regular intervals of azimuth and elevation angles. Although he
`assumes the center of the camera sensor (image plane, and hence center of
`projection, COP) remains in the same position for all photographs, he acknowledges
`that in practice, this is difficult to maintain, and camera position errors are common.
`Since they are acquired with a “normal” lens, Tsao’s source photographs necessarily
`have a limited field of view, i.e., they present a natural (linear perspective) view of a
`limited portion of the typical human hemispherical view of the world. They appear
`flat and undistorted compared to the (distorted) full surround image data shown in
`Appendix B.
`
`42. Using the source images, Tsao creates a textured spherical polyhedron via “ray
`casting.” First the source images are positioned (registered) on a hypothetical sphere
`such that each source image is tangent to the sphere, and the center of each is
`positioned according to the azimuth and elevation angles from which it was taken.
`
`
`As shown in Figure 2, since the field of view is generally larger than the angular
`increment of azimuth and/or elevation between adjacent source images, the images
`so arranged will exhibit substantial “overlap.” This is an intentional design feature of
`Tsao’s approach.
`
`Figure 2: 2D view of 5 rectilinear source images registered on portion of sphere at
`increments of 22.5 degrees showing source image overlap
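
As a simple illustration of this registration geometry (my own sketch, not Tsao's code), the Python fragment below computes the point on a unit sphere at which a source photograph taken at a given azimuth and elevation would be tangent, i.e., where its center is registered.

    import math

    def tangent_point(azimuth_deg, elevation_deg):
        # Center of a source image placed tangent to the unit sphere.
        az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
        return (math.cos(el) * math.cos(az),
                math.cos(el) * math.sin(az),
                math.sin(el))

    # Five images 22.5 degrees apart in azimuth, as in the 2D illustration of Figure 2:
    centers = [tangent_point(az, 0.0) for az in (0.0, 22.5, 45.0, 67.5, 90.0)]
    print(centers[0])    # (1.0, 0.0, 0.0)
    print(centers[-1])   # approximately (0.0, 1.0, 0.0), spanning a 90-degree arc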
`
`43. Next a spherical polyhedron approximating the hypothetical sphere is introduced,
`comprised of triangles and trapezoids with vertices located on the surface of the
`hypothetical sphere. In Tsao’s embodiment there are many more polygons in the
`spherical polyhedron (216) than source images (94), which guarantees substantial
`overlap of source images, i.e., each polygon of the spherical polyhedron is spanned
`by multiple overlapping source images. For example, Figure 3 shows a 2D depiction
`of a single 22.5-degree increment from Figure 2 with cross sections of the polyhedral
`sphere shown – in this illustrative 2D example, the 90-degree arc of a circle is
approximated by 8 polygons that are spanned by 5 source images. Note that the
`spherical polyhedron inscribes the hypothetical sphere, while the source images
`circumscribe it.
`
`Figure 3: Spherical polyhedron inscribing - and source images circumscribing -
`hypothetical sphere
`
`44. Each polygon (triangle or trapezoid) of the spherical polyhedron is then scan
converted, i.e., divided into pixels. The density of the pixels is related to the
`resolution of the source images at their registration points. A ray is fired from the
`center of projection (COP) through each pixel and intersected with as many of the
`source images as it hits. A texture element (texel) is generated for each pixel of each
`polygon of the spherical polyhedron by averaging color values from all of the
`intersected source images. Figure 4 depicts this ray casting approximation.
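
For illustration only (this is my sketch of the idea, not Tsao's implementation), the Python fragment below performs the averaging step just described: the texel for a pixel of a polyhedron polygon is the average of the colors sampled from every source image its ray intersects.

    def average_texel(samples):
        # samples: RGB colors read from each overlapping source image the ray intersects.
        if not samples:
            return (0.0, 0.0, 0.0)
        count = len(samples)
        return tuple(sum(channel) / count for channel in zip(*samples))

    # A ray through one pixel of the spherical polyhedron hits three overlapping photographs:
    print(average_texel([(0.9, 0.2, 0.1), (0.8, 0.3, 0.1), (1.0, 0.1, 0.1)]))  # ~(0.9, 0.2, 0.1)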
`
`45. Although Tsao implements this projection of linear perspective photographs onto a
`spherical p