`
`Control No.: 90/012,689
`
Examiner: KE, PENG
`
`Patent No.: 7,542,035
`
`Confirmation No.: 2147
`
`Atty Docket No. GRND-L5
`
`Control No.: 90/012,590
`
Examiner: KE, PENG
`
`Patent No.: 6,243,099
`
`Confirmation No.: 2143
`
`Atty Docket No. GRND-L6
`
`EXPERT DECLARATION OF JAMES H. OLIVER, Ph.D.
`
`Table of Contents
`
PROFESSIONAL BACKGROUND AND QUALIFICATIONS

DEFINITIONS
"Texture Mapping"
"Environment Mapping"

OXAAL PATENTS 7,542,035 ('035) AND 6,243,099 ('099)
Source Code Example

THE FIELD OF THE OXAAL PATENTS IS "IMAGE PROCESSING"
Image Processing Is Distinct From Computer Graphics

WHAT COULD PROVIDE A REASON TO COMBINE

LUKEN (US PATENT 5,923,334)

GULLICHSEN ET AL. (US PATENT 5,796,426)

CHIANG ET AL. (US PATENT 6,028,584)

GREENE

HAEBERLI

CONCLUSION
`
`Appendices:
`Appendix A: Curriculum Vitae
`Appendix B: Source Code Appendix from provisional application 60/071,148
`Appendix C: Source Code Appendix from 6,243,099
`Appendix D: Source Code Appendix from 7,542,035
`
`
`
`
`I, James H. Oliver, declare as follows:
`
`1. I have been retained by Oncam Grandeye Inc. to provide expert assessment of the
`
`patents and prior art referred to in this case and their relevance to the reexamination
`
`of US Patent Numbers 6,243,099 and 7,542,035. I have reviewed these patents and
`
`the prior art related to this case. My assessment is an objective evaluation of the facts
`
`presented in this case as they relate to common practice based on my extensive
`
`experience in computer graphics, visualization, and virtual reality. Comments below
`
`reflect not only my current understanding of relevant technology, but also the general
`
`understanding among people working in image processing technology on January 7,
`
`1998.
`
`Professional Background and Qualifications
`
`2. I have been a student and practitioner of engineering for more than 35 years, having
`
`received my B.S. (Union College, 1979), M.S. (Michigan State University, 1981), and
`
`Ph.D. (Michigan State University, 1986) degrees, all in mechanical engineering. My
`
particular expertise is in the general area of human computer interaction technologies, encompassing computer graphics, geometric modeling, virtual and augmented reality, and collaborative networks for applications in product development and complex system operation. I hold courtesy faculty appointments in
`
`the Departments of Aerospace Engineering, Electrical and Computer Engineering,
`
`and Industrial and Manufacturing Systems Engineering. In addition, I have held a
`
`variety of industry positions as a practicing engineer, and began my academic career
`
`in 1988.
`
`3. I am currently employed by, and hold the title of University Professor at, Iowa State
`
`University of Science and Technology (ISU) as the Larry and Pam Pithan Professor
`
`
`
`
of Mechanical Engineering, where I teach mechanical design at the introductory sophomore level and at the senior undergraduate level (both required of all ME majors), as well as two graduate-level design courses in computer graphics and computer-aided design.
`
`4. Since my arrival at ISU in 1991 I have continuously enhanced and developed our
`
graduate course ME557, Computer Graphics and Geometric Modeling, to keep up with the
`
`rapid advances in the field and to support the growing research emphasis on
`
`advanced visualization and virtual reality technology at ISU. The course has grown in
`
`popularity over the years and is now cross-listed with the Department of Electrical
`
`and Computer Engineering and the Department of Computer Science. It also
`
`attracts substantial on-line enrollment from students across the country. The course
`
`covers the theory and practice of contemporary computer graphics technology
`
`including object modeling, homogeneous coordinates, coordinate transformations,
`
`projections, lighting models, rendering, texture mapping, as well as a variety of
`
`advanced techniques including stencil buffers, shadows, particle systems, etc.
`
`5. As a recognized expert in the field, I was asked in 1993 to review what has since
`
`become the seminal book in the field of surface modeling (The NURBS Book, by L.
`
Piegl and W. Tiller, 1995). This technology is at the heart of all contemporary
`
`computer modeling software tools and has matured significantly only within the past
`
`20 years. I leveraged my research experience in this field, and with permission of the
`
`authors, developed a graduate course in the mathematical foundations of surface
`
`modeling to align with the manuscript, and ultimately adopted the book for my
`
`course. The course is now offered as an advanced (600-level) graduate course, and I
`
`teach it every other year.
`
`6. From 1997-2001 I took a leave from my university position to accept a position in
`
`
`
`
`the software industry. I joined Engineering Animation Incorporated (Nasdaq: EAII)
`
`to lead their core technology team focused on CAD-independent, large model
`
visualization to facilitate virtual product prototyping. In 1999, I conceptualized,
`
`planned and led development of e-Vis, the first commercial software product to
`
combine high-performance product visualization with secure Internet-based collaboration capabilities to empower distributed product development and supply
`
`chain integration. After several corporate acquisitions, these technologies are now
`
referred to as TeamCenter Visualization, part of the Siemens PLM Software tool suite,
`
`and are used by manufacturers around the world.
`
7. In fall 2003 I was named director of the Virtual Reality Applications Center (VRAC)
`
`at ISU and have fostered its continued growth. Under my leadership VRAC's
`
`ongoing contract research has increased from $9M to $20M and faculty involvement
`
has broadened to encompass colleagues from all of ISU's colleges. From 2005-2007 I led fundraising, technical specification, bid process and vendor management for a
`
`$5M upgrade of our flagship device, the C6 - now the world's highest resolution
`
`immersive VR facility, and in 2008 led an $800K upgrade of Lee Liu Auditorium in
`
`Howe Hall, making it the world's highest resolution stereoscopic immersive theater.
`
8. I have garnered financial support for my research program from several federal
`
`sources including the National Science Foundation (NSF), NASA, and the US
`
`Department of Defense research branches of the Navy, Air Force and Army.
`
`Industry sponsors of my research include John Deere, Rockwell Collins, and Boeing.
`
`I have received numerous professional honors and awards including the Gustus L.
`
Larson Memorial Award from the American Society of Mechanical Engineers (ASME), recognizing early career achievement, and the National Science Foundation's prestigious Young Investigator Award. I served six years as Associate Editor of the ASME Transactions, Journal of Computing and Information Science in
`
`
`
`
`Engineering. I am a Fellow of the ASME and hold three US patents on innovations in
`
`mechanical design and manufacturing.
`
9. More details of my qualifications are presented in my comprehensive curriculum vitae,
`
`which is submitted for reference as Appendix A.
`
`Definitions
`
`10. For the purposes of this report, some common terminology is defined as it would be
`
`understood by those of ordinary skill in the art of image processing, especially as of
`
January 12, 1998.
`
`"Texture Mapping"
`
`11. Texture mapping is a common 3D computer graphics (CG) technique that enables
`
`enhanced visual realism of a 2D CG rendering of a 3D scene. Texture mapping can
`
`be thought of as the computer graphics equivalent to applying a decal to a physical
`
`object. In common use, the term "texture mapping" generally refers to the entire
`
process of rendering a textured CG scene. The process requires that 2D images first be associated with 3D geometric models via the assignment of "texture coordinates," i.e., each vertex (x, y, z) in the 3D model is associated with a 2D location (u, v) in the image texture. During the standard CG rendering process a
`
`lighting model determines each pixel's color. With the addition of texture mapping,
`
`each pixel's color is augmented (or replaced completely) with additional elements
`
`derived from the texture itself. Both color contributions are subject to a projection
`
`from the 3D world space onto a 2D view plane, which generally incorporates a
`
`perspective transformation.
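For illustration, this vertex/texture-coordinate association can be sketched in fixed-function OpenGL, the standard API of the era. This is a minimal sketch under my own assumptions; the vertex and texture-coordinate values are illustrative only and are not taken from any reference discussed in this declaration.

    /* Minimal sketch: each 3D vertex (x, y, z) is paired with a 2D texture
       coordinate (u, v) before rendering. Values are illustrative only. */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);  /* a previously loaded texture */
    glBegin(GL_TRIANGLES);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
        glTexCoord2f(0.5f, 1.0f); glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();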
`
`
`
`
`"Environment Mapping"
`
`12. Environment mapping is a technique for providing background or far field-of-view
`
`elements of the scene. An environment map is a 2D image that represents the distant
`
`environment. The 2D image is indirectly related to the 3D world (or model
`
`environment) via a variety of 3D->2D mappings. These mappings define the
`
`relationship between a point on a sphere, cylinder or cube and its equivalent location
`
`on a "flattened" version of same. (Environment maps are sometimes illustrated in
`
`their unfolded state to show their two-dimensional nature, as e.g. in Plate 2 of the
`
`Greene article discussed below.)
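As one concrete example of such a 3D->2D mapping, the following sketch flattens a unit direction on a sphere to latitude/longitude image coordinates in [0,1]. The function name and the particular parameterization are mine, chosen for illustration, and do not come from any cited reference.

    #include <math.h>

    /* Sketch of one common sphere-to-image mapping: a unit direction
       (dx, dy, dz) is flattened to coordinates (u, v) in [0,1] x [0,1]. */
    void sphere_to_uv(double dx, double dy, double dz, double *u, double *v)
    {
        *u = (atan2(dz, dx) + M_PI) / (2.0 * M_PI);  /* longitude */
        *v = acos(dy) / M_PI;                        /* latitude  */
    }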
`
`Oxaal Patents 7,542,035 ('035) and 6,243,099 ('099)
`
`13. I have reviewed patents 7,542,035 (the '035 patent) and 6,243,099 (the '099 patent).
`
`These two Oxaal patents refer to a Source Code Appendix, which I have also
`
reviewed. The version filed with provisional application 60/071,148 is attached as Appendix B; the version filed with the '099 patent is attached as Appendix C; and the
`
`version attached to the '035 patent is attached as Appendix D. These patents have
`
`generally identical text and drawings, except that the '099 patent includes some
`
`additional drawings with code taken from the same Appendix.
`
14. These patents are based in the field of image processing. The field of image processing was, as of January 12, 1998, still significantly separated from the field of
`
`computer graphics. The boundaries of these fields have changed significantly over
`
`time: in the 1970s and 1980s there was a very clear separation between these fields of
`
`technology. Since 1998, as the processing power and data bandwidth of integrated
`
`circuits increased, the two fields have become less distinct. It is therefore important
`
`to note that the field of technology addressed by the Oxaal patents would have been
`
`regarded as image processing, not computer graphics.
`
`
`
`
`15. By the early 1970's computer graphics was focused on producing realistic images of
`
`3D models while utilizing the limited computational power of the day. Before
`
`1971 rendering involved calculating the angle between a polygonal face normal
`
`vector and the vector from a hypothetical light source. Color was assigned to each
`
`polygonal facet of the model according to this angle to simulate lighting. For curved
`
`surfaces represented by polygonal meshes, this resulted in a "faceted" appearance.
`
`16. In 1971 Henri Gouraud introduced a new rendering algorithm enabled by assigning
`
independent surface normal vectors at each vertex of the mesh. The light-source/normal vector computation was done to compute a unique color at each vertex. After transforming all polygon vertices into view-window (2D pixel) coordinates, Gouraud applied an efficient scan-line processing algorithm to interpolate color values, first along each edge of each triangle, and then pixel-by-pixel
`
`across each horizontal scan-line. The result was relatively smooth shading of a curved
`
`surface mesh.
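The innermost scan-line step can be sketched as follows. This is my own illustration of the technique, not Gouraud's code: once per-vertex colors have been interpolated along the triangle edges to the two ends of a scan-line span, the span between them is filled by linear interpolation.

    /* Sketch: fill one horizontal scan-line by linearly interpolating the
       RGB colors cL and cR already computed at the span's two ends. */
    void shade_span(int xL, int xR, const float cL[3], const float cR[3],
                    float row[][3])
    {
        for (int x = xL; x <= xR; x++) {
            float t = (xR == xL) ? 0.0f : (float)(x - xL) / (float)(xR - xL);
            for (int k = 0; k < 3; k++)
                row[x][k] = (1.0f - t) * cL[k] + t * cR[k];
        }
    }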
`
`17. As a Ph.D. student in the early 1970's, Ed Catmull (founder of Pixar) was inspired to
`
`further increase visual realism by observing that in the real world objects are often
`
`enhanced by adding decorative surface detail, for example, like wood veneer on a
`
table top or wallpaper on a wall. His doctoral dissertation in 1974 introduced
`
`"texture mapping" to accomplish a similar effect in a simulated computer graphics
`
`scene. Texture mapping involved the association of a 2D image to 3D geometry so
`
`that during rendering, for each pixel that maps onto a 3D surface, its associated
`
`texture image is sampled to determine color. Color derived from texture mapping
`
`could also be blended with corresponding contributions from Gouraud shading to
`
`produce increased realism in computer generated scenes.
`
18. Although texture mapping incorporates digital 2D images, the image processing
`
`
`
`
`research of this era was completely distinct, focused primarily on image analysis, such
`
`as segmentation and object correlation, as well as image transformation.
`
`19. The Oxaal patents describe methods for interactive viewing of full surround image
`
`data. They indicate many options and variations, as pioneer patents often do. In
`
`general, a main concept is the use of texture mapping to create a textured p-surface,
`
`as an intermediate 3D geometric model object, in the process of viewing full
`
`surround image data. The use of texture mapping for this purpose was very
`
`surprising at the time. Texture mapping was itself a known technology, and hardware
`
`accelerators were commercially available: the surprise is that you would want to use
`
`that technology in the field of image processing, in the way indicated by Oxaal.
`
`20. Oxaal discovered that an image texture which has been mapped onto an arbitrarily
`
`shaped surface, using a defined projection point, can be subsequently rendered with
`
`precise perspective, when viewed from that projection point, using standard CG
`
`hardware and software techniques. Standard CG libraries, such as OpenGL and
`
DirectX, provide software bindings to hardware-accelerated rendering algorithms.
`
`These libraries typically support linear perspective transformations.
`
`21. Oxaal's methods address the use of full-surround image textures, generated, for
`
`example, from a "fisheye" camera lens. Since the projection implemented to apply
`
`the texture to the 3D geometry mimics the path of the light that created it, the
`
`rendering of it onto the 2D view plane can be accomplished directly with hardware
`
`accelerated linear perspective.
`
`22. Oxaal's approach is counterintuitive to one of ordinary skill in the art of image
`
`processing as of January 12, 1998. In most environment mapping applications, the
`
`environment map exists only as a 2D image texture, and is not directly associated
`
`
`
`
`with 3D model geometry. Instead a single texture-mapped quadrilateral primitive is
`
`used to represent the view window. The quadrilateral is not explicitly modeled, and is
`
`not a spatial model of the world boundary.
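The conventional approach can be sketched as follows. This is a minimal illustration under my own assumptions; u0, v0, u1, v1 stand in for texture coordinates that would be computed from the current view direction.

    /* Sketch of the conventional approach: one texture-mapped quadrilateral
       represents the view window. The quad is not modeled world geometry. */
    glBegin(GL_QUADS);
        glTexCoord2f(u0, v0); glVertex3f(-1.0f, -1.0f, -1.0f);
        glTexCoord2f(u1, v0); glVertex3f( 1.0f, -1.0f, -1.0f);
        glTexCoord2f(u1, v1); glVertex3f( 1.0f,  1.0f, -1.0f);
        glTexCoord2f(u0, v1); glVertex3f(-1.0f,  1.0f, -1.0f);
    glEnd();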
`
`23. Some literature does utilize a "buffer"- as opposed to the textured p-surface of the
`
`Oxaal patents - as an intermediate object. This distinction is important. Here's the
`
`definition of "buffer" from Dictionary.com: Computers: a storage device for temporari!J
`
`holding data until the computer is reacjy to receive or process the data, as when a receiving unit has an
`operating speed lower than that if the unit feeding data to it. That is, a "buffer" is a holding
`
`stage in the transfer of physical data - 1 and 0 values in a memory, or high and low
`
`states on a physical bus.
`
`24. In contrast, Oxaal's "p-surface" is comprised of textured 3D model geometry
`
`(generally a triangular mesh) with texture coordinates assigned as described above.
`
`The p-surface represents a 3D model of the world boundary and its texture is applied
`
`with a projection that corresponds to the intrinsic properties of the lens that created
`
`the (full-surround) source image. Thus, if viewed from the center of projection, the
`
`textured p-surface appears as the eye would naturally "see" the world. Using standard
`
OpenGL rendering, a view frustum is defined so that a portion of the textured p-surface is visible, and the scene is rendered using the standard texture-rendering
`
`utilities with standard linear perspective.
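The rendering step just described can be sketched in fixed-function OpenGL as follows. The display-list name and view-direction variables are mine, for illustration; this sketch does not reproduce the Oxaal Appendix code.

    /* Sketch: place the eye at the center of projection, define a view
       frustum, and render the textured p-surface with OpenGL's standard
       linear-perspective pipeline. Names are illustrative. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, aspect, 0.1, 100.0);  /* the view frustum */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 0.0,          /* eye at the center of projection */
              viewX, viewY, viewZ,    /* user-selected view direction    */
              0.0, 1.0, 0.0);         /* up vector                       */

    glEnable(GL_TEXTURE_2D);
    glCallList(pSurfaceList);         /* the textured p-surface geometry */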
`
`25. One of ordinary skill in the art would have expected that Oxaal's approach would, in
`
`general, require rendering many more textured primitives. It is surprising that Oxaal's
`
`approach requires creation of a 3D object (the textured p-surface) for handling 2D
`
`data (the environment map or image). However, one benefit of Oxaal's approach is
`
`increased flexibility in viewing options. Another is increased scalability.
`
`
`
`
`Source Code Example
`
`26. Regarding the code seen in, e.g., Figs. 9A-10B of the '099 patent: The function
`
`readTexture, which is listed from the middle of 9B through the end of 9C, opens two
`
files (char* file1 and char* file2) and returns a Texture t containing two arrays of pixel values, tex1 and tex2 (essentially, textures), read in from file1 and file2.
`
`27. The function display, listed from the middle of 9D through the middle of 9F, calls
`
createHemisphere twice, passing it first tex1 and then tex2 to be texture mapped
`
`onto corresponding hemispheres. See also, in the '099 patent, col. 7, lines 21-34,
`
`discussing texture mapping of tex1 and tex2 to the triangulated sphere.
`
28. Figures 10A and 10B list createHemisphere, which creates the texture mapped
`
`hemispheres. The function createHemisphere takes a display list number and the
`
`number of points on a side (which determines, for example, the size and number of
`
`triangles), and returns a texture mapped hemisphere to be displayed.
`
`29. Looping for each of the numPts points, OpenGL is notified that a triangle strip will
`
be drawn (geom is GL_TRIANGLE_STRIP in this embodiment; see initialize_objects, Fig. 9D). The map function is passed reference coordinates (u, v)
`
`on a unit square that index the hemisphere to be texture mapped. The map function
`
`computes and returns a point on the sphere (x, y, z) corresponding to (u, v), and then
`
texture coordinates tx and tz are calculated. Tx and tz correspond to a point in the image (source) texture. The function glTexCoord2f(tx,tz), followed by glVertex3f(x,y,z), defines a hemisphere vertex and its corresponding texture coordinate as part of the triangle strip hemisphere geometry. The map function,
`
`vertex creation and texture coordinate calculations are then repeated for the other
`
`two vertices of the triangle.
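The loop structure just described can be sketched as follows. This is my paraphrase of that structure, not a reproduction of the Appendix code; map() stands in for the patent's mapping function, and the tx/tz formulas shown are placeholders for the calculation described above.

    /* Sketch of the triangle-strip loop described above (a paraphrase, not
       the Appendix code). map() returns the sphere point for (u, v); the
       tx/tz expressions below are placeholders for the actual calculation. */
    for (int i = 0; i < numPts; i++) {
        glBegin(GL_TRIANGLE_STRIP);
        for (int j = 0; j <= numPts; j++) {
            double u = (double)j / numPts;
            double v = (double)i / numPts;
            double x, y, z;

            map(u, v, &x, &y, &z);                 /* sphere point for (u, v) */
            glTexCoord2f(0.5f + 0.5f * (float)x,   /* placeholder tx          */
                         0.5f + 0.5f * (float)z);  /* placeholder tz          */
            glVertex3f((float)x, (float)y, (float)z);

            map(u, v + 1.0 / numPts, &x, &y, &z);  /* vertex on the next row  */
            glTexCoord2f(0.5f + 0.5f * (float)x,
                         0.5f + 0.5f * (float)z);
            glVertex3f((float)x, (float)y, (float)z);
        }
        glEnd();
    }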
`
`
`
`
`30. Once all points to be mapped to the hemisphere have been mapped, the loop
`
`terminates, and the texture mapped hemisphere (in this embodiment, a list of texture
`
`mapped triangular strips) is returned.
`
31. Because the p-surface is built as a set of OpenGL polygon primitives (in this example), those primitives are handed to the rendering pipeline and are manifest as explicit 3D geometry, i.e., a virtual surface that can be visible in the scene.
`
`The Field of the Oxaal Patents is "Image Processing"
`
`32. The relevant field of art is Image Processing. This is the segment of information
`
`processing which deals with handling two-dimensional representations of the
`
`external world. For example, the image formed by a lens is two-dimensional ("2D"),
`
`and correcting the perspective of such an image (or part of such an image) would be
`
`a 2D-to-2D transform.
`
`Image Processing Is Distinct From Computer Graphics
`
`33. The relevant art is image processing. This is important to notice, because the fields of
`
`image processing and computer graphics have become far less distinct since the
`
`Oxaal filing. At the time of the Oxaal filing, it was not only surprising to perform the
`
`claimed functions, but also surprising to turn to the art of computer graphics to
`
`exploit hardware capabilities which were not otherwise necessary.
`
34. At the time of the Oxaal filing, image processing was often directed to analytic techniques, or to manipulation of images which (by the standards of the time) consumed
`
`large amounts of memory; by contrast, computer graphics were originally driven by
`
`synthesis (e.g., constructing virtual objects for computer-aided design, simulations or
`
`video games).
`
`
`
`
`35. These fields developed in parallel through the past 40 years, generally with distinct
`
`research communities. For example, a top journal for practitioners of image
`
`processing is IEEE Transactions on Pattern Analysis and Machine Intelligence
`
(PAMI), while the computer graphics community was drawn to the Association for
`
`Computing Machinery's (ACM) SIGGRAPH Conference and published in journals
`
`like the ACM's Transactions on Graphics.
`
`36. The volumes of data which had to be managed in image processing were very large
`
`in relation to the computing hardware of the 1980s, and still very challenging with
`
`the computing hardware of the late 1990s. Often high-end workstations were used
`
`for such tasks, in part because of bus limitations and processor throughput. It was
`
`often difficult to achieve good performance in interactive image processing
`
`applications because of this. Thus adding steps to the processing pipeline in an
`
`interactive image processing architecture would have been generally regarded as a
`
`very bad idea.
`
`37. The fields of image processing and computer graphics remained generally distinct
`
`through the mid-1990's and only started to "cross-fertilize" in the late 1990's as
`
`digital technologies began to revolutionize traditional photography.
`
`38. The concept of "texture mapping" was developed in computer graphics to apply
`
`"skins" to virtual objects. This was a way of adding realism to a very simplified
`
`synthetic structure. Since there was no comparable deficiency of realism in image
`
`processing, there was no perceived need, prior to the Oxaal filing, to incorporate
`
`such techniques in the art of image processing.
`
`What Could Provide a Reason to Combine
`
`39. In any field of technology, a reason for modifying a known technology, or for
`
`
`
`
`introducing elements from another technology, would have to make sense to people
`
`who work in this technology, AND would have to be recognized as a desirable goal.
`
`For example, increased throughput in image processing pipelines would be
`
`recognized as a desirable goal in the art of image processing.
`
`40. However, persons of ordinary skill in the art of image processing would not have
`
`recognized "in order to create a projection of the complete environment" as an
`
`obvious goal. A typical image processing practitioner of the era would have viewed
`
`projection of the complete environment as unnecessary and inefficient since they
`
`were focused on transforming only a portion of the environment map, and
`
`computational efficiency was the overriding goal.
`
`41. It is equally true that a person of ordinary skill in the art of computer graphics would
`
`not have recognized "in order to create a projection of the complete environment"
`
`as stating an obvious goal.
`
`42. A person of ordinary skill in the art of image processing would not have modified
`
`Chiang et al. (US Patent 6,028,584), nor combined it with anything else, "in order to
`
`create a projection of the complete environment."
`
`43. A person of ordinary skill in the art of image processing would not have modified
`
Gullichsen et al. (US Patent 5,796,426), or combined it with anything else, "in order
`
`to create a projection of the complete environment."
`
`44. A person of ordinary skill in the art of computer graphics would not have regarded
`
`Gullichsen as providing any useful teaching, as of 1997 or later. A person of ordinary
`
`skill in the art of computer graphics would not even have had any apparent reason to
`
`look for useful teaching in this document.
`
`
`
`
`45. A person of ordinary skill in the art of image processing would not have modified
`
`Chiang et al. (US Patent 6,028,584), or combined it with anything else, "in order to
`
`provide user with a full hemisphere of information". Chiang suggests, at the bottom
`
of column 1 and the top of column 2, that "spherical mapping" is not necessary, and
`
`that cylindrical or panorama mapping is adequate. Chiang's summary of the
`
`invention, in the middle of column 3, specifically refers to panoramic images. This is
`
consistent with Chiang's main theme of modifying QuickTime.
`
`46. A person of ordinary skill in the art of image processing would not have modified
`
`Luken 5,923,334, or combined it with anything else, "in order to create a projection
`
`of the complete environment."
`
47. A person of ordinary skill in the art of computer graphics would not have regarded
`
`Luken as providing any useful teaching at all, as of 1997 or later. A person of
`
`ordinary skill in the art of computer graphics would not even have had any apparent
`
`reason to look for useful teaching in this document.
`
`Luken (US Patent 5,923,334)
`
`48. Luken presents a data structure for handling environment maps. The main teaching
`
`of Luken is shown in its Figure 5, which illustrates these triangular data structures.
`
`Handling the large volumes of data in environment maps was an important concern
`
`in 1996, and this appears to be Luken's main thrust.
`
`49. Luken does not use texture mapping. Instead, view window pixel values are explicitly
`
`evaluated. Luken's data structure appears to be incompatible with texture mapping,
`
`or at least would make it difficult to implement texture mapping.
`
`50. Luken describes a method for creating an environment map scene viewer which is
`
`
`
`
`said to be more computationally efficient than the existing state-of-the-art at that
`
time. In particular, Luken points out the limitations of QuickTime VR, which implements a cylindrical environment map, and of RenderMan, which provides both
`
`sphere and cube environment maps. Luken's main point is the development of a
`
`unique triangular data structure that facilitates computational efficiency and a
`
relatively smooth environment map by using a regular polyhedron as the environment map. The method is described in terms of an octahedral environment
`
`map, but Luken claims it could be implemented with other polyhedral environment
`
`maps.
`
`51. Luken merely converts a cube environment map into an octahedral environment
`
`map. Like most environment mapping implementations, Luken's octahedron is not
`
`manifested as actual model geometry, but rather as a hypothetical octahedron
`
`circumscribing the cuboid environment textures. The octahedron is tessellated into
`
`smaller triangular faces and each vertex of the tessellated octahedron is referred to as
`
`a "pixel" of the octahedron. Rays defined between each octahedron pixel and its
`
`center point are intersected with the cuboid of textures. The color value from each
`
`texture intersection point is copied into the corresponding octahedron pixel. To
`
`render a view of the resulting environment map, the view frustum is discretized into
`
`view pixels and rays emanating from each are intersected with the set of hypothetical
`
`octahedron pixels. The intersection of each (viewing) ray with the octahedron is used
`
`to determine the closest octahedron pixel, and its contents are copied directly to the
`
`view pixel.
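As I read Luken, the copying operation amounts to the following sketch. Both helper functions are hypothetical names standing in for Luken's geometry; the point of the sketch is that pixel values are copied directly rather than texture mapped.

    /* Sketch of Luken's resampling as I understand it: for each vertex
       ("pixel") of the tessellated octahedron, cast a ray from the center
       and copy the color found on the cuboid of textures. The helper
       functions are hypothetical stand-ins for Luken's geometry. */
    void fill_octahedron_pixels(unsigned long octPixel[], int numOctPixels)
    {
        for (int p = 0; p < numOctPixels; p++) {
            double dir[3];
            octahedron_pixel_direction(p, dir);         /* ray from center  */
            octPixel[p] = intersect_cuboid_color(dir);  /* direct copy      */
        }
    }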
`
`52. In Luken, as in Chiang, the only thing that could be argued to be an intermediate
`
`"object" is a memory buffer. Luken uses two buffers - one to hold the data whose
`
original spatial relation is indicated by the hypothetical octahedron, and one to serve as the frame buffer that he processes in scan-line order to render the scene - merely
`
`
`
`
`copying pixel values from one to the other. Luken uses conversion from one 2D data
`
`set to another, and does not create any intermediate object like the p-surface in the
`
`Oxaal patents. A buffer is not an "object" in the image processing pipeline, but
`
`merely a location of data. More precisely, Luken's buffer is not itself an object in the
`
`model geometry. Luken merely uses the triangular memory arrangement to facilitate
`
`his brute-force renderer.
`
53. Luken mentions a "frame buffer 104" at, e.g., col. 5, l. 1. However, the discussion of the "viewing windows" in Figs. 12 and 13 shows that what is buffered is a representation of a planar "view window" (field of view) into which is mapped the octahedral environment map (step 1003 in Fig. 10). The view window is a planar rectangle "positioned into ncol columns and nrow rows" in step 1401 of Fig. 14. "[T]he pixels of the view window must be made to match the pixels if [sic] the display window." Col. 12, ll. 42-45. Again, there is no p-surface.
`
`54. The octahedron in Luken's Figure 4 is not actually used as model object geometry.
`
`Instead, this is simply an illustration of the implicit spatial relationship behind the
`
`data structure of Luken's Figure 5.
`
`55. Luken does not create any intermediate texture or model geometry object between
`
`the original image data and the resulting view. Even more clearly, Luken does not use
`
`texture mapping to create any intermediate texture or object between the original
`
`image data and the resulting view.
`
`56. Luken does not disclose any "3D object generated by a computer graphics system,"
`
`and therefore does not disclose any "p-surface" (as that term is used in the Oxaal
`
`patents).
`
`
`
`
Gullichsen et al. (US Patent 5,796,426)
`
`57. Gullichsen is primarily directed to a method of approximation. Gullichsen refers to
`
`the existence of texture mapping, but does not use texture mapping to create any
`
`intermediate object. Gullichsen does not create any intermediate texture or object
`
`between the original image data and the resulting view.
`
`58. Gullichsen's "video frame buffer" is merely an intermediate stage in the physical
`
transfer of data - it is not explicit geometry. It is not an "object" in the image
`
`processing pipeline, but merely a location.
`
`59. A fisheye lens cannot produce rectilinear perspective, so the resulting raw image
`
`appears circular and severely distorted. If some portion of a fisheye image is
`
`identified, it is relatively straightforward to correct the distortion to reproduce