UNITED STATES PATENT AND TRADEMARK OFFICE

____________

BEFORE THE PATENT TRIAL AND APPEAL BOARD

____________

Google Inc.,
Petitioners,

v.

Vedanti Systems Limited,
Patent Owner.

____________

Case No. IPR2016-00215
Patent No. 7,974,339

____________

DECLARATION OF Dr. Omid Kia

Dr. Omid Kia Declaration

I, Omid Kia, make this declaration in connection with the proceedings identified above.
Introduction

1. I have been retained by counsel for Vedanti Systems Limited ("Vedanti") as a technical expert in connection with the proceedings identified above. I submit this declaration on behalf of Vedanti in the Inter Partes Reviews of United States Patent No. 7,974,339 ("the '339 patent") in Consolidated Case No. IPR2016-00212.
2. I base my opinions below on my professional training and experience and my review of documents and materials produced in these Inter Partes Reviews. My compensation for this assignment is $450 per hour. My compensation is not dependent on the substance of my opinions or my testimony or the outcome of the above-identified proceedings.
Qualifications

3. My qualifications for forming the opinions set forth in this report are listed in this section and in Exhibit A attached, which is my curriculum vitae. Exhibit A also includes a list of my publications.

4. I am currently the Chief Image Scientist at Northstrat, Inc. (hereinafter referred to as "Northstrat"). In this capacity, I serve as an expert in all of Northstrat's imaging initiatives in high-technology research and development, and in all of Northstrat's activities pertaining to remote sensing, surveillance, and image/signal processing problems.
5. Prior to joining Northstrat, I served as the Senior Staff Scientist at ITT Exelis, Space Sciences Division (hereinafter referred to as "ITT Exelis"), in the same capacity. ITT Exelis is a leader in Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) related products and systems and in information and technical services, supplying military, government, and commercial customers in the United States and globally. ITT Exelis is a pure-play aerospace, defense, and information solutions company with strong positions in enduring and emerging global markets, some 20,000 employees, and 2010 revenues of about $6 billion.
6. In early 2015, ITT Exelis merged with Harris Corporation, which has complementary capabilities. For further reading, please see: https://www.harris.com/solutions, https://www.harris.com/what-we-do/intelligence-surveillance-and-reconnaissance, https://www.harris.com/solution-grouping/remote-sensing-systems, and https://www.harris.com/solution/advanced-analytics.
7. A copy of my C.V. is Ex. 2002.
8. In brief, and with particular relevance to the subject matter of this Inter Partes Review, my background and qualifications to be an expert witness in this matter are as follows. I received my Ph.D. degree in Electrical Engineering from the University of Maryland at College Park in 1997. My thesis addressed the compression and processing of images, with content spanning image processing, compression, and communication theory and application.
9. Immediately after my graduation, I continued work in media compression and processing in the Compression Group of the Information Technology Laboratory at the National Institute of Standards and Technology. In this role I continued my research and expanded on similar topics across several media forms. In particular, I served as the United States Government ambassador to the MPEG standardization group. I also expanded on my thesis research topic to exploit compressed-domain processing of media for images, video, and multimedia content. I worked with engineers who utilized the MPEG standardization body's source code for performance, quality, and testing. There were at least three individuals in my immediate compression group, and about ten individuals working at other companies, who contributed to the source code. These individuals held Bachelor's degrees in either Electrical Engineering or Computer Science and had direct experience with image processing, compression, and communications as they pertain to encoding and decoding of images. At least two individuals in my immediate group were recent graduates from France performing a cross-country internship; they were interested in pursuing advanced degrees, and as part of their education they had to perform actual work to gain relevant experience. These individuals had a bachelor's degree in either Electrical Engineering or Computer Science, with one to two years of working experience on the compression algorithms being studied in my group, which included image compression and transmission.
10. Since 1999, I have held technical leadership positions in image and signal processing fields for delivery of highly technical solutions to the market. In particular, I have worked with X-ray imaging for medical diagnostic, security scanning, and non-destructive testing purposes.
11. From 1999 to 2001, I served as the Chief Technical Officer at IMACOM, a medical imaging company that manufactured and sold fluoroscopy and radiography systems. From 2002 to 2003, I served as the president of Sigma Vision, a company that specialized in radiography for veterinary, security, and non-destructive testing applications. I worked with hardware and software engineers who provided technical support in developing and maintaining the imaging machines produced by the company. A large part of the system was providing real-time processing, display, and transmission of images within the system and across system boundaries to systems such as Radiological Information Systems (RIS) and Hospital Information Systems (HIS). Specifically, interfaces to RIS systems were accomplished by the Picture Archiving and Communication System (PACS). We performed research and development on various compression and transmission options, including the well-accepted practice of transmitting Motion JPEG video, in which each frame in a video is compressed by the JPEG image compression technique. The engineers working with me had bachelor's degrees in Electrical Engineering and Computer Science with hands-on experience in various aspects of the system, some of which amounted to at least one to two years of equivalent experience in image processing, compression, and communications.
12. From 2004 to 2009, I served as the Chief Scientist and Director of Digital X-Ray Development at Imaging Sciences International, Inc. (hereinafter referred to as "ISII"), a company that marketed a broad scope of imaging and workflow solutions for the dental market. Since 2009 I have served at ITT Exelis as a senior scientist responsible for government contracts as Chief Engineer and Chief Scientist. Also at ITT Exelis, I served as one of the leading experts in image and video compression.
13. In these capacities, I have worked on many aspects of digital imaging for X-ray, optical, hyperspectral, and radiofrequency modalities in active or passive modes, including algorithm development, software development, hardware development, receptor development, and system design. I have also implemented image and video compression baselines for various products. In every case where compression was involved, I had to perform an image quality analysis to determine the appropriate level of compression or mitigation of the anticipated loss induced by the compression. Moreover, I also had to perform rate analysis to determine the appropriate level of compression to achieve in order to meet the available channel requirements, ranging from dial-up modem to satellite communication.
Information Considered in Forming Opinion

14. I have reviewed the Inter Partes Review pleadings for the above-referenced proceedings, including the '339 Patent, U.S. Patent No. 4,791,486 ("Spriggs"), U.S. Patent No. 5,225,904 ("Golin"), U.S. Patent No. 6,529,634 ("Thyagarajan"), "Spatially Adaptive Subsampling of Image Sequences" ("Belfor"), and Dr. Grindon's Declarations. I have relied on my own knowledge and experience as well as published documents in forming my opinions.
Person of Ordinary Skill in the Art

15. In my opinion, a person of ordinary skill in the art pertaining to the '339 patent at the relevant date discussed below would have at least a technical degree in Electrical Engineering, Computer Science, or an equivalent curriculum, with coursework in image processing and at least one year of hands-on experience with compression and communication techniques.
16. Alternatively, the person of ordinary skill may have earned a degree in Electrical Engineering, Computer Science, or an equivalent curriculum, with coursework in compression and communication and at least one year of hands-on experience in imaging.
17. Despite the focus of the invention on reducing bandwidth requirements, i.e., data reduction, Petitioner's statement of one of ordinary skill is so expansive that it encompasses persons with no coursework and no experience in data compression.
'339 Patent

18. Coincident with the explosive growth of the Internet through the utilization of embedded multimedia in the World Wide Web, the invention of the '339 patent addressed a demanding and important telecommunication requirement for image and video streaming. The concept of motion estimation had already formed the basis for video encoding, achieving high compression rates by removing redundancies associated with moving objects in the scene. With improvements in memory, processing, and communication, a new era of streaming media was ushered in, with video streaming being the most valuable entity.
19. To address one aspect of managing the transmission requirements for high throughput and low latency, the inventors of the '339 patent came up with a solution of applying a pixel selection process to regions generated by an analysis process performed on the pixels of a video image.
20. By separately using both of these processes, the resulting system allows for optimization of the regions and, in addition, a pixel selection process that "allows random, sequenced, or other suitable processes to be used to select and locate pixels with optimized regions." '339 patent, Ex. 1001, 7:7-9. Combining frame analysis to generate regions with pixel selection for each region as taught in the patent has been used to significantly improve the quality of video transmission and reception over the Internet.
Terminology

21. The term "data" is used in the '339 patent in the computing and communications sense of the word.

22. Thus, as would be understood by one of ordinary skill in the art, data refers to "digital information" or "bits that can be made available for storage, transmission and/or interpretation."
Spriggs

23. The Spriggs patent discloses a method for image transmission employing a process for determining when to subdivide picture areas to reduce the amount of data needed for transmitting an image. Ex. 1005 at Abstract ("Spriggs"). Spriggs begins with an image and then determines whether to subdivide the image into smaller picture areas based on how well an estimated version of the data compares to the actual pixel data for a given area.
24. If the estimated version is within a threshold of the actual pixel data, then the area is not subdivided; but if the estimated version differs from the actual values by more than the threshold, the area is divided both horizontally and vertically into quadrants.
25. The estimated version is determined using the four corner pixel values for each area. Spriggs interpolates between the four corner pixel values to create an estimate for each addressable pixel within the area. The process continues until all areas of the image that have not been subdivided are analyzed to see if they need to be subdivided.
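The estimation step just described can be sketched in a few lines of code. This is an illustrative reconstruction, not Spriggs' actual implementation: the NumPy array representation, the bilinear form of the interpolation, and the maximum-absolute-difference error measure are assumptions made for the sketch.

```python
import numpy as np

def interpolate_from_corners(block):
    """Estimate every pixel of a square block by bilinear interpolation
    from its four corner samples, as in Spriggs' estimation step."""
    n = block.shape[0]
    t = np.linspace(0.0, 1.0, n)                    # normalized 0..1 coordinates
    tl, tr = float(block[0, 0]), float(block[0, -1])
    bl, br = float(block[-1, 0]), float(block[-1, -1])
    top = tl + t * (tr - tl)                        # interpolated top edge
    bot = bl + t * (br - bl)                        # interpolated bottom edge
    return top[None, :] * (1 - t)[:, None] + bot[None, :] * t[:, None]

def within_threshold(block, threshold):
    """True if the interpolated estimate is everywhere within the
    threshold of the actual samples, so the area need not be subdivided."""
    est = interpolate_from_corners(block)
    return float(np.max(np.abs(est - block))) <= threshold
```

A flat or linearly shaded area is reproduced exactly from its corners and passes the test, while an area with interior detail the corners cannot predict fails it and would be subdivided.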
26. Spriggs produces an encoded data stream that can be decoded by a receiver by transmitting division codes and also transmitting the sample values used in the estimation/interpolation process for making the subdivision decisions.
27. Fig. 4 of Spriggs shows in more detail how the variably-sized areas are created. Spriggs begins by generating the addresses of the four corner points for an image. Ex. 1005 at Fig. 4. Next, the corner addresses are pushed onto a memory stack. The corresponding corner samples are then transmitted to the receiver.
28. Spriggs then performs an algorithm to determine whether to subdivide the image. Spriggs performs this methodology by pulling the four corner addresses off the stack and then generating interpolated samples for the frame based upon the four corner samples that correspond to the four corner addresses. Ex. 1005, 2:27-35. The interpolated version of the image is compared with the actual samples to determine if the estimated and actual samples differ by less than a threshold criterion. Id., 2:36-46. If the threshold is met, then a division code of "0" is transmitted. Id., 3:2-4. The "0" indicates that the image has not been subdivided and that a region has been generated.
29. If the comparison is not within the margin of the threshold criterion, the frame is subdivided into quadrants. Ex. 1005, 3:4-7. Five addresses, the address at the center of the region being divided and the centers of each of the four sides of the region, are determined, as represented in Spriggs by the bolded values (E, F, G, H, I). Id., 3:7-9.
30. These five center pixel values will be used as corner values for evaluating the quadrants. Because the frame is subdivided, a subdivision code of "1" is transmitted. Ex. 1005, 3:4-5.

31. Then, the sample values for the five addresses are transmitted, as shown and described with respect to Fig. 6. Ex. 1005, 3:63-68.
32. The top of Fig. 6 of Spriggs shows an exemplary image frame that has undergone the described encoding process, and the bottom of Fig. 6 shows the exemplary output stream for the image frame, which is transmitted to a receiver. Id. The output stream consists of sample data (SA, SB, etc.) shown in the middle column and division codes (i.e., 0 and 1) shown in the left column.
33. The bracketed data in the right column are for informational purposes and are not transmitted. Ex. 1005, 3:65-68. These bracketed corner addresses show the corner addresses that relate to the division codes of '0' and '1'.
34. The top of Fig. 6 shows a number of regions, including regions defined by their corner points. The regions defined by corner points (AFEI), (FBIG), and (IGHD) are the upper left, upper right, and lower right quadrants of the image. Additionally, there are several smaller regions defined by corner points (EPOS), (PKSQ), (OSJR), (SQRN), (KINL), (JNCM), and (NLMH).
35. The method for creating the data stream of Fig. 6 begins, as described above for Fig. 4, with the four corner sample values for the image (SA, SB, SC, SD) being transmitted. These values are interpolated to generate an interpolated block, and the block of interpolated values is compared to the actual sample values for the image.
36. In the example shown in Fig. 6, the threshold is not met and the image is subdivided. Because of the subdivision, a "1" is transmitted. Id. The methodology then determines the five center pixel values SE, SF, SG, SH, and SI. These will be available for use as corners for the four sub-regions being created. Combining the previously specified corner points with the newly defined center points allows specification of corner points for the newly divided sub-regions. The methodology then looks at the top left quadrant, using the corner pixel values SA, SF, SE, and SI from the stack to construct interpolated values for the quadrant. The methodology compares the interpolated pixel values for the quadrant to the actual pixel values for the quadrant; in this example, the interpolated pixels meet the threshold criterion and a "0" is transmitted, indicating that this region (defined by corners AFEI) does not need to undergo any further subdivision. Based upon having issued the "region data" of "0", no further examination of the pixel data of this region is required or performed in the coding process.
37. The process continues with the upper right quadrant, having corner pixel values SF, SB, SI, and SG, which are taken off of the stack. These four corner pixel values are interpolated to define interpolated pixel values for the quadrant, which are compared to the actual pixel values of the quadrant; again the threshold is met, so a "0" is transmitted. No further examination of the pixel data of newly-created region FBIG takes place in the coding process.
38. The encoding process continues by evaluating the lower left quadrant, composed of corner pixel values SE, SI, SC, and SH, which are read off of the stack. The interpolated quadrant values, when compared to the actual pixel values for the quadrant, do not meet the threshold; therefore, a "1" is generated for transmission and the quadrant is further subdivided into smaller regions. The quadrant is divided horizontally and vertically and, as shown in Figs. 4 and 6, the five center values SJ, SK, SL, SM, and SN are transmitted to the receiver. The upper left sub-quadrant, defined by corner points (E, K, J, N), is interpolated from corner pixel values SE, SK, SJ, and SN and compared to the actual sample values for that region; the methodology determines that this quadrant should be subdivided, and therefore a "1" is generated for transmission. Again, the five center values are transmitted, namely SO, SP, SQ, SR, and SS.
39. As can be seen in Fig. 6, after the final subdivision of the lower left quadrant occurs, causing transmission of pixel values SO, SP, SQ, SR, and SS, no additional subdivisions occur for the regions defined by corner addresses (EPOS), (PKSQ), (OSJR), (SQRN), (KINL), (JNCM), (NLMH), and (IGHD). In fact, no corresponding pixel values need be transmitted for these regions, and Figs. 4 and 6 show only a set of eight consecutive "0"s being transmitted to the receiver. Thus, for each of these regions, only the subdivision code "0" must be sent. No further examination of the pixel data of these newly-created regions takes place in the coding process.
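The recursive coding walk-through above can be sketched as a short program. This is a hypothetical reconstruction for illustration only: the ordering of the five center samples, the smallest-block base case, and the error measure are assumptions, and the actual Spriggs encoder works from a stack of corner addresses rather than array slices.

```python
import numpy as np

def interpolate(block):
    """Bilinear estimate of a square block from its four corner samples."""
    n = block.shape[0]
    t = np.linspace(0.0, 1.0, n)
    top = block[0, 0] + t * (block[0, -1] - block[0, 0])
    bot = block[-1, 0] + t * (block[-1, -1] - block[-1, 0])
    return top[None, :] * (1 - t)[:, None] + bot[None, :] * t[:, None]

def encode(image, threshold, stream):
    """Recursively emit Spriggs-style output: a '0' division code when
    corner interpolation is good enough (region finalized), or a '1'
    plus five center samples followed by the four quadrants."""
    est = interpolate(image)
    if np.max(np.abs(est - image)) <= threshold or image.shape[0] <= 2:
        stream.append(("code", 0))      # region generated; nothing more sent
        return
    stream.append(("code", 1))          # subdivide into quadrants
    m = image.shape[0] // 2
    # one plausible ordering of the five center samples (top, left,
    # center, right, bottom side midpoints)
    for r, c in [(0, m), (m, 0), (m, m), (m, -1), (-1, m)]:
        stream.append(("sample", float(image[r, c])))
    # recurse: top-left, top-right, bottom-left, bottom-right quadrants,
    # which share the center row and column as their corners
    for q in (image[:m+1, :m+1], image[:m+1, m:], image[m:, :m+1], image[m:, m:]):
        encode(q, threshold, stream)

# Usage: the four image corner samples are sent first, before any region
# exists, mirroring paragraph 48's point about the initial transmission.
img = np.zeros((5, 5))
stream = [("sample", float(img[r, c])) for r, c in [(0, 0), (0, -1), (-1, 0), (-1, -1)]]
encode(img, threshold=4.0, stream=stream)
```

Note that, as in the declaration's analysis, the sketch never revisits a region after its "0" code is issued: a region's code is the last thing the coder does with that region.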
Golin

40. Golin discloses a video compression system in which each region of a video frame is custom encoded. Ex. 1006, 4:68-5:1. Golin teaches a roughness estimator for detecting edges in the pixel data; if such edges or "roughness" make the encoding process unacceptable, the region is split horizontally or vertically. Ex. 1006. Petitioner relies on Golin for its disclosure of a roughness estimator. Golin describes fill methods, such as DPCM, that are quite distinct from pixel selection.
The '339 Patent Includes Both an Analysis System and a Pixel Selection System, in Contrast to the Spriggs System, Which Includes Only an Analysis System

41. Embodiments of the '339 patent disclose a system and method that uses data optimization to reduce data transmission requirements. Ex. 1001, 3:13-14. The data transmission system includes a frame analysis system and a pixel selection system. The frame analysis system analyzes the data within the frames to divide each frame into a plurality of regions defined by region data. Id., 3:51-4:11. Not until the region data has been determined for a region and received by the pixel selection system can the pixel selection system operate on that region to select pixel values for it. The region needs to be known and received for the pixel selection process to take place. Id. The pixel selection system receives the region data and uses the region data in selecting one or more pixels from within the region to transmit to the receiver for reconstruction of the image. Id., 4:12-14.
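The two-stage structure described above, analysis first and pixel selection operating only on the received region data, can be illustrated with a deliberately simplified sketch. The fixed-size region split and the single random pick per region are hypothetical placeholders, not the patent's actual algorithms; the point of the sketch is only that region data is produced by one stage and then consumed by a separate selection stage.

```python
import random

def frame_analysis(frame, block):
    """Hypothetical analysis stage: divide the frame into fixed-size
    regions and return the region data (top-left corner and extent of
    each region). The '339 patent's analysis is more sophisticated;
    this stub only illustrates that regions are produced first."""
    h, w = len(frame), len(frame[0])
    return [(r, c, block, block)
            for r in range(0, h, block)
            for c in range(0, w, block)]

def pixel_selection(frame, region_data, rng=random.Random(0)):
    """Hypothetical selection stage: for each *received* region, select
    one representative pixel from within it (here at random, echoing
    the patent's 'random, sequenced, or other suitable processes')."""
    selected = []
    for r, c, dh, dw in region_data:        # operates on the region data
        rr = rng.randrange(r, min(r + dh, len(frame)))
        cc = rng.randrange(c, min(c + dw, len(frame[0])))
        selected.append(((rr, cc), frame[rr][cc]))
    return selected
```

The dependency is explicit in the signatures: pixel_selection cannot run until frame_analysis has produced the region data, which is the ordering the declaration contrasts with Spriggs.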
42. In contrast, the Spriggs patent includes only an analysis system for analyzing whether to split an area using quad-tree decomposition. Spriggs does not include the required pixel selection system.
43. Spriggs recursively looks at an area of an image to see if the area needs to be subdivided or if corner pixel values will be sufficient to represent the area. If the corner pixel values, when interpolated, are not sufficient to represent the data in the area, the area is subdivided and five center values (the center of the block being subdivided and the centers of each of its four sides) are determined and transmitted, as shown by Spriggs.
44. Each of these pixel values will be used in the corners of the quadrant regions being formed. These pixel values are data transmitted to the receiver to be used by a decoder for reconstructing the image, and they are shared among many possible regions.
45. Additionally, a division code ("0" or "1") is transmitted to the receiver, indicating whether or not the area is subdivided.
46. A region of the image in Spriggs, as shown in Fig. 6, is only finalized when a "0" code has been generated. The code is transmitted to the receiver, not to any pixel selection system. Thus, given the lack of any process at the transmitter that has access to the region data for selecting pixels, Spriggs does not allow for a pixel selection process based on the region data.
47. Spriggs fails to teach a pixel selection system that receives region data and that generates pixel values for each region based on the region data, as required by the independent claims. Spriggs simply performs analysis on the image to determine whether an image should be subdivided horizontally and vertically, and transmits corresponding division codes. These division codes are never received by a pixel selection system, and the division codes are not used for selecting pixel values for the region.

48. The initial pixel values that are transmitted in the output data stream consist of the four corner values for the image, e.g., SA, SB, SC, and SD. Since the regions have not yet been generated, these values are not the result of any pixel selection for a region. These values are transmitted before the regions are even known.
49. The coding process of Spriggs goes on to generate the regions. In the example, regions AFEI, FBIG, JNCM, and IGHD become the regions in which each of A, B, C, and D are located. These regions are not known when the values for A, B, C, and D are transmitted; in particular, JNCM is not known until at least two subdivision cycles later. As the coding process continues, sets of five center pixels may be determined whenever there is a split. As was the case for the initial pixels, these pixels do not receive an associated region until a "0" division code is issued.

50. Accordingly, Spriggs fails to teach generating one set of pixel data for each region based on received region data/optimized matrix data.
51. The lower half of Fig. 6 of Spriggs shows a data transmission stream that is sent to the receiver for the image shown in the upper half of Fig. 6. The transmission stream does not present a set of pixel data for any of the regions. The left column represents the division codes/region data, the middle column represents the transmitted sample values, and the right column, which is not transmitted, contains helpful comments indicating the corresponding corners of the block associated with the individual division codes of 1 or 0.
52. As can be seen in Fig. 6, corresponding pixel data is not generated for each region based upon the division codes/region data. For example, the regions defined by corners (AFEI) region 1, (FBIG) region 2, (EPOS) region 3, (PKSQ) region 4, (OSJR) region 5, (SQRN) region 6, (KINL) region 7, (JNCM) region 8, (NLMH) region 9, and (IGHD) region 10 are each indicated by a representative "0" for the division code shown in the first column. Once the "0" is transmitted, the coding process moves on to the next region. No pixel selection process for the region is triggered. Spriggs entirely misses the approach of the '339 patent, which combined an analysis to generate region data with a process that receives the region data and selects pixels on the basis of the region data.
53. The recursive process of Fig. 4 in Spriggs is the entire coding process; the encoding results in an output stream as shown in Fig. 6 of Spriggs. No further processing at the encoder occurs after the recursive process of Fig. 4 has completed. The process produces a single output of data that includes the division codes and the pixel data, as corner pixel values, which were used to generate the division codes based on a desired encoding error criterion for the regions. Neither the division codes nor the pixel data are used in a further "pixel selection" process. There is no process disclosed or suggested by Spriggs that receives the region data and generates pixel data for a region.
54. In Fig. 4, the corner pixel values can be found among the pixel values transmitted during the production of the regions by the analysis system. These values are determined during the region-generating process. There is no subsequent pixel selection process.
55. The recursive flowchart of Fig. 4 determines when to subdivide regions into sub-regions and provides pixel values for transmission. However, this is a single process, and the pixel values are transmitted before a region is finally identified by a '0' division code.
56. As shown in Fig. 6, SA through SI are transmitted before region AFEI is determined by the '0' division code. The process of the pixel selection system is completely missing from Spriggs, and the Petitioner merely repurposes its analysis for the "analysis system," suggesting that it likewise covers the "pixel selection system."
57. In Fig. 4, the "0" and "1" division codes are transmitted to the receiver. The "1"s and "0"s in the data stream can be used by the receiver to determine how to reconstruct the image at the receiver. The division codes are not received and used in a process for selecting pixel data in the transmitter.
58. In Spriggs, upon each iteration of its recursive process as exemplified in Fig. 4, the process always begins with the corner pixel values and the addresses of the corner pixels of the initial block. The process performs an analysis function to determine whether the block should be split, which is recorded in the division codes. These division codes are immediately transmitted as they are generated, as seen in Fig. 4. The region data is never received by a pixel selection system in the Spriggs encoder that generates one set of pixel data for each region, forming a new set of data for transmission, as required by claim 1. The division codes are transmitted in a transmission stream to the receiver.
59. The example of corner pixels given in the Petition is "SA, SB, SC, SD". Each of these is the corner of a different region in Fig. 6. Spriggs' coding process begins with these pixels. They are certainly not selected "based on" optimized matrix data; they are generated and transmitted before Spriggs' coding process makes its first decision.

60. Spriggs never selects between two or more sets of pixel data, which would require a decision process that is clearly lacking in Spriggs.
61. In Fig. 6, the set SA, SB, SC, SD does not form a region. SA is in region AFEI, SB is in a different region, FBIG, and SC is in yet another region, JNCM.
Belfor

62. Belfor reduces the data representing an image by performing a subsampling process in which "[b]y discarding a part of the pixels, the image can be transmitted more efficiently." Ex. 1007, 1. Fixed subsampling is compared with spatially adaptive subsampling, and improved storage and transmission performance is shown for spatially adaptive subsampling relative to fixed subsampling.
63. "In a spatially adaptive subsampling scheme, the image is subdivided into square blocks, and each block is represented by a specific spatial sampling lattice." Ex. 1007, 1. The scheme addresses high-detail and low-detail areas of an image by using a dense sampling lattice in high-detail blocks and a sparse sampling lattice, with few pixels, in low-detail blocks. Accommodating the varying detail through variable subsampling, in conjunction with a representative lattice structure, provides the improvement over using a fixed sampling and lattice structure.
64. Block sizes considered in Belfor refer to the dimensions of the block. Belfor illustrates a block size with dimensions of 8 pixels by 8 pixels. Ex. 1007, 4. Furthermore, Belfor notes, "The number of possible modes is affected by the block size because for decreasing block size, the number of possible sampling lattices within the block decreases as well." Id. at 4. This is because the number of pixels in the block determines how many subsampling lattices are possible. For a given block size, Belfor provides an associated set of modes from which to choose, each mode having a specific subsampling lattice. Id. at 4. For the use of sophisticated interpolation in decoding, the set of modes for a given block size is preferably hierarchical. Id. at 4. Hierarchical means mode n+1 is always a subset of mode n, as shown in Fig. 4. Id. at 4.
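The hierarchical property, each denser mode containing every pixel kept by the next sparser one, can be made concrete with a small sketch. The particular 4x4 lattices below are invented for illustration and are not Belfor's published lattices; only the subset relationship is the point.

```python
# Hypothetical hierarchical sampling lattices for a 4x4 block: each mode
# keeps the pixels marked True, and mode n+1 keeps a subset of mode n.
LATTICES = [
    [[True] * 4 for _ in range(4)],                                       # mode 0: all 16 pixels
    [[(r % 2 == 0 and c % 2 == 0) for c in range(4)] for r in range(4)],  # mode 1: 4 pixels
    [[(r == 0 and c == 0) for c in range(4)] for r in range(4)],          # mode 2: 1 pixel
]

def is_subset(sparse, dense):
    """True if every pixel kept by the sparse lattice is also kept by
    the dense one (the hierarchy Belfor prefers for interpolation)."""
    return all(not s or d
               for srow, drow in zip(sparse, dense)
               for s, d in zip(srow, drow))
```

With such a hierarchy, a decoder's interpolation for mode n+1 can reuse exactly the sample positions it already handles for mode n.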
65. The system of Belfor, as shown in Fig. 5, runs each block through a pre-filtering step. This is followed by a subsampling step for each of the modes in the set.
66. The subsampled image for each mode at each block is evaluated for quality by an error computation module. Then the mode allocation module determines which mode is the best representative mode to be allocated to each individual block. After the modes have been allocated, in other words, after a mode has been assigned to each block, the data for the image is transmitted through the channel. The data would include the mode assignment for each block and the pixels remaining after applying the respective subsampling to each of the blocks.
67. Belfor further discusses the complexities involved in allocating the modes to the blocks. According to Belfor, "The mode allocation is of great significance as it influences the output quality considerably." Ex. 1007, 4. In the simplest allocation approach, only two modes are used. To produce a desired output bit rate, a fraction of the blocks is assigned one mode and the remaining fraction is assigned the other mode; the fractions are chosen so that the total bits approximate the desired bit rate. To implement this approach, the less dense mode is applied to all of the blocks, and the distortion or error is computed for each block. The blocks with the highest computed distortion are then assigned the denser mode, increasing the total number of bits while removing the largest amounts of distortion. The aim of the process is to keep the total number of bits below a desired amount while decreasing the distortion.
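The two-mode allocation just described can be sketched as follows. The function name, the per-block bit costs, and the greedy highest-distortion-first upgrade order are illustrative assumptions for the sketch, not Belfor's published algorithm.

```python
def allocate_two_modes(block_errors, bits_sparse, bits_dense, bit_budget):
    """Sketch of a simple two-mode allocation: start with the sparse
    mode everywhere, then upgrade the highest-distortion blocks to the
    dense mode while the total bit count stays within the budget."""
    n = len(block_errors)
    modes = ["sparse"] * n
    total = n * bits_sparse
    # visit blocks from highest to lowest measured distortion
    for i in sorted(range(n), key=lambda i: block_errors[i], reverse=True):
        if total + (bits_dense - bits_sparse) <= bit_budget:
            modes[i] = "dense"
            total += bits_dense - bits_sparse
    return modes, total
```

For example, with per-block errors [5, 1, 9, 3], 16 bits per sparse block, 48 bits per dense block, and a 128-bit budget, the two worst blocks (errors 9 and 5) are upgraded and the budget is exactly met.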
68. Belfor seeks to describe a mode allocation algorithm that can be used with any number of modes. Ex. 1007, 5. But increasing the number of modes to more than two produces a pair of equations that cannot be uniquely solved. Id. at 5. It is necessary that the modes produce a rate-distortion curve that is convex, as shown in Fig. 6. Id. at 5. In other words, as the bit rate ratio (remaining bits in the lattice divided by total bits in the block) increases, moving to the right in Fig. 6, the distortion D is reduced. Id. at 5. This convexity is in line with the theoretical definition of rate-distortion theory in that every added bit used to represent an image should further reduce the distortion.
