`571.272.7822
`
` Paper No. 7
`
`
` Filed: May 20, 2016
`
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`____________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`____________
`
`GOOGLE INC.,
`Petitioner,
`
`v.
`
`VEDANTI SYSTEMS LIMITED,
`Patent Owner.
`____________
`
`Case IPR2016-00215
`Patent 7,974,339 B2
`____________
`
`
`
`Before MICHAEL R. ZECHER, JUSTIN T. ARBES, and
`JOHN A. HUDALLA, Administrative Patent Judges.
`
`HUDALLA, Administrative Patent Judge.
`
`
`
`DECISION
`Institution of Inter Partes Review
`35 U.S.C. § 314(a) and 37 C.F.R. § 42.108
`
`
`Petitioner, Google Inc. (“Google”), filed a Petition (“Pet.”) (Paper 2)
`requesting an inter partes review of claims 1, 6, 7, 9, 10, 12, and 13 of U.S.
`Patent No. 7,974,339 B2 (Ex. 1001, “the ’339 patent”) pursuant to 35 U.S.C.
`§§ 311–19. Patent Owner, Vedanti Systems Limited (“Vedanti”), filed a
`Preliminary Response. Paper 6 (“Prelim. Resp.”). We have jurisdiction
`under 35 U.S.C. § 314.
Under 35 U.S.C. § 314(a), the Director may not authorize an inter
partes review unless the information in the petition and preliminary response
`“shows that there is a reasonable likelihood that the petitioner would prevail
`with respect to at least 1 of the claims challenged in the petition.” For the
`reasons that follow, we institute an inter partes review as to claims 1, 6, 7, 9,
`10, 12, and 13 of the ’339 patent on the asserted ground of unpatentability
`presented. To administer the proceeding more efficiently, we also exercise
`our authority under 35 U.S.C. § 315(d) to consolidate Case IPR2016-00215
`with Case IPR2016-00212 and conduct the proceedings as one trial.
`
`
`I. BACKGROUND
`
A. Related Proceedings
Both parties identify the following proceeding related to the ’339
patent (Pet. 3, 59; Paper 5, 2): Max Sound Corp. v. Google, Inc., No. 5:14-
cv-04412 (N.D. Cal. filed Oct. 1, 2014).1 Google was served with this
`
`1 In Max Sound, plaintiff Max Sound Corporation (“Max Sound”) sued
`Google and others for infringement of the ’339 patent. Ex. 1011, 1–2.
`Although Max Sound listed Vedanti as a co-plaintiff at the outset of the
`case, Max Sound later alleged Vedanti was a defendant. See id. at 1; Order,
`Max Sound Corp. v. Google, Inc., No. 3:14-cv-04412 (N.D. Cal. Nov. 24,
`2015), ECF No. 139, 3–4. The court dismissed the action for lack of subject
matter jurisdiction after determining Max Sound did “not demonstrat[e]
`that it had standing to enforce the ’339 patent at the time it initiated th[e]
`action, with or without Vedanti as a party.” See id. at 9. Max Sound has
`appealed the dismissal. See Notice of Appeal, Max Sound Corp. v. Google,
`Inc., No. 3:14-cv-04412 (N.D. Cal. Feb. 19, 2016), ECF No. 150. In its
mandatory notices pursuant to 37 C.F.R. § 42.8, Vedanti states that it owns
the ’339 patent and that the Max Sound case was “filed without
authorization” by Max Sound. Paper 5, 2.

complaint on November 20, 2014. See Pet. 3 (citing Ex. 1021). The ’339
patent is also the subject of another petition for inter partes review in Case
IPR2016-00212. Pet. 59; Paper 5, 2.

Google also identifies a second action among the same parties that
was voluntarily dismissed without prejudice: Vedanti Sys. Ltd. v. Google,
Inc., No. 1:14-cv-01029 (D. Del. filed Aug. 9, 2014). See Pet. 3 n.1 (citing
Exs. 1009, 1010), 59 (citing Ex. 1010). We agree with Google (see id. at 3
n.1) that, as a result of the voluntary dismissal without prejudice, this
Delaware action is not relevant to the bar date for inter partes review under
35 U.S.C. § 315(b). See Oracle Corp. v. Click-to-Call Techs., LP, Case
IPR2013-00312, slip op. at 15–18 (PTAB Oct. 30, 2013) (Paper 26)
(precedential in part).

B. The ’339 patent
The ’339 patent is directed to “us[ing] data optimization instead of
compression, so as to provide a mixed lossless and lossy data transmission
technique.” Ex. 1001, 1:36–39. Although the embodiments in the patent are
described primarily with reference to transmitting frames of video data, the
Specification states that the described optimization technique is applicable to
any type of data. See Ex. 1001, 1:50–52, 4:44–46, 4:60–62, 7:42–45, 9:54–
56. Figure 1 of the ’339 patent is reproduced below.
`
`
`
`
`Figure 1 depicts system 100 for transmitting data having data transmission
`system 102 coupled to data receiving system 104. Id. at 2:47–49.
`Data transmission system 102 includes frame analysis system 106 and
`pixel selection system 108. Id. at 2:65–67. The frame analysis system
`receives data grouped in frames, and then generates region data that divides
`frame data into regions. Id. at 1:42–46. Regions can be uniform or non-
`uniform across the frame, and regions can be sized as symmetrical matrices,
`non-symmetrical matrices, circles, ellipses, and amorphous shapes. Id. at
`5:54–6:3. Figure 10 of the ’339 patent is reproduced below.
`
`
`Figure 10 depicts segmentation of an array of pixel data where the regions
`are non-uniform matrices. Id. at 10:38–41. The pixel selection system
`receives region data and generates one set of pixel data for each region, such
`as by selecting a single pixel in each region. Id. at 1:46–49. In Figure 10
`above, the “X” in each matrix represents a selected pixel. Id. at 10:24–29,
`10:47–52. Transmission system 102 then transmits matrix data and pixel
`data, thereby “reduc[ing] data transmission requirements by eliminating data
`that is not required for the application of the data on the receiving end.” Id.
`at 3:13–15, 7:63.
`Data receiving system 104 further includes pixel data system 110 and
`display generation system 112. Id. at 3:35–36. Pixel data system 110
`receives region data and pixel data and assembles frame data based on the
`region data and pixel data. Id. at 4:32–34. In turn, display generation
`system 112 receives frame data from pixel data system 110 and generates
`video data, audio data, graphical data, textual data, or other suitable data for
`use by a user. Id. at 4:44–46.
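The transmit-and-reassemble flow described above lends itself to a compact illustration. The following Python sketch is our own toy model of that flow, not code from the ’339 patent: the fixed 2×2 uniform regions and the top-left pixel selection rule are simplifying assumptions (the patent also contemplates non-uniform matrices, circles, ellipses, and amorphous regions), and all function names are hypothetical.

```python
# Toy model (illustrative only) of the ’339 patent's data-optimization flow:
# divide a frame into regions, select one pixel per region, transmit the
# region data and pixel data, and reassemble frame data on the receiving end.
# Fixed 2x2 regions and top-left selection are assumptions for illustration.

def make_regions(width, height, size=2):
    """Divide a frame into uniform square regions as (x, y, size) tuples."""
    return [(x, y, size)
            for y in range(0, height, size)
            for x in range(0, width, size)]

def select_pixels(frame, regions):
    """Select a single representative pixel per region (here: top-left)."""
    return [frame[y][x] for (x, y, _) in regions]

def reassemble(regions, pixels, width, height):
    """Receiver side: fill each region with its transmitted pixel value."""
    out = [[0] * width for _ in range(height)]
    for (x, y, size), value in zip(regions, pixels):
        for dy in range(size):
            for dx in range(size):
                out[y + dy][x + dx] = value
    return out

frame = [[10, 10, 80, 82],
         [11, 10, 79, 81],
         [10, 12, 80, 80],
         [12, 11, 81, 79]]
regions = make_regions(4, 4)
pixels = select_pixels(frame, regions)   # 4 transmitted pixels instead of 16
display = reassemble(regions, pixels, 4, 4)
```

In this toy, only 4 of 16 pixel values cross the channel, mirroring the patent's stated goal of "eliminating data that is not required for the application of the data on the receiving end."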
`
C. Claim 1
Claim 1 of the ’339 patent is illustrative of the challenged claims and
recites:
1. A system for transmitting data transmission comprising:
`a analysis system receiving frame data and generating
`region data comprised of high detail and or low detail;
`a pixel selection system receiving the region data and
`generating one set of pixel data for each region forming a new
`set of data for transmission;
`a data receiving system receiving the region data and the
`pixel data for each region and generating a display;
`wherein the data receiving system comprises a pixel data
`system receiving matrix definition data and pixel data and
`generating pixel location data;
`wherein the data receiving system comprises a display
`generation system receiving pixel location data and generating
`display data that includes the pixel data placed according to the
`location data.
`Ex. 1001, 10:62–11:9.
`
D. The Prior Art
`Google relies on the following prior art:
`Golin et al., U.S. Patent No. 5,225,904, filed Dec. 4, 1991,
`issued July 6, 1993 (Ex. 1006, “Golin”);
`Thyagarajan et al., U.S. Patent No. 6,529,634 B1, filed
`Nov. 8, 1999, issued Mar. 4, 2003 (Ex. 1008, “Thyagarajan”);
`and
`
`Ricardo A.F. Belfor et al., Spatially Adaptive Subsampling
`of Image Sequences, 3 IEEE TRANSACTIONS ON IMAGE
`PROCESSING 1–14 (Sept. 1994) (Ex. 1007, “Belfor”).
`
E. The Asserted Ground
`Google challenges claims 1, 6, 7, 9, 10, 12, and 13 under 35 U.S.C.
`§ 103(a) as unpatentable over Belfor, Thyagarajan, and Golin. Pet. 3, 19–
`58.
`
F. Claim Interpretation
`In an inter partes review, we construe claims by applying the broadest
`reasonable interpretation in light of the specification. 37 C.F.R. § 42.100(b);
`see also In re Cuozzo Speed Techs., LLC, 793 F.3d 1268, 1275–78 (Fed. Cir.
`2015), cert. granted sub nom. Cuozzo Speed Techs. LLC v. Lee, 136 S. Ct.
`890 (mem.) (2016). Under the broadest reasonable interpretation standard,
`and absent any special definitions, claim terms are given their ordinary and
`customary meaning, as would be understood by one of ordinary skill in the
`art in the context of the entire disclosure. See In re Translogic Tech. Inc.,
`504 F.3d 1249, 1257 (Fed. Cir. 2007). Any special definitions for claim
`terms or phrases must be set forth “with reasonable clarity, deliberateness,
`and precision.” In re Paulsen, 30 F.3d 1475, 1480 (Fed. Cir. 1994).
`For purposes of this Decision, and based on the current record, we
`construe certain claim terms or phrases as follows.
`
1. “region” and “matrix”
`Google contends a “region” is a “division of a frame,” Pet. 13,
`whereas Vedanti contends a “region” is “a contiguous group of pixels within
`a frame.” Prelim. Resp. 12–13. As such, both parties agree that a region is a
`part of a frame, which we conclude is consistent with the usage of “region”
`
in the Specification. We therefore consider whether a region must also be “a
contiguous group of pixels.”
`Vedanti asserts “a region is the result of an analysis of pixels to
`determine if they share common information and should be part of the same
`region.” Id. at 13. Vedanti further asserts the “frame analysis system”
`creates regions “based upon a comparison of pixel data, such as [a
`comparison of] adjacent pixel data to a threshold in order to determine if a
`pixel location should be included within a region.” Id. (citing Ex. 1001,
`8:26–44). Although we agree with Vedanti that the Specification of the ’339
`patent describes the use of adjacent pixel comparison to create regions, we
`do not agree that pixel comparison—and, by extension, region creation—is
`expressly limited to adjacent or contiguous pixels. See Ex. 1001, 5:52–53
`(“[O]ther suitable pixel variation detection functionality can be provided.”).
`Accordingly, for purposes of this Decision, we decline to adopt Vedanti’s
`language regarding contiguous pixels and conclude a “region” is a “division
`of a frame.”
`Regarding the term “matrix,” both parties agree that a “matrix” is a
`type of “region,” though Vedanti does not propose an express construction
`for “matrix.” Pet. 15–16; Prelim. Resp. 13. Google further contends a
`“matrix” is a “region with square or rectangular dimensions.” Pet. 15–16.
`This is consistent with the Specification, which gives examples of
`symmetrical (square) and nonsymmetrical (rectangular) matrices. See Ex.
`1001, 4:1–6 (cited at Pet. 16), 5:60–62 (quoted at Prelim. Resp. 13).
`Accordingly, for purposes of this Decision, we construe “matrix” to mean “a
`region with square or rectangular dimensions.”
`
2. “region data,” “matrix data,” and “matrix definition data”
The parties propose the following constructions of these terms:

“region data” (claims 1, 10, 12, 13)
  Google’s proposed construction: None.
  Vedanti’s proposed construction: Data that defines the region including
  the size, shape, and location of the region within a frame. Prelim.
  Resp. 14.

“matrix data” (claims 7, 9, 12)
  Google’s proposed construction: Uniform matrix dimensions or
  non-uniform matrix dimensions and sequences. Pet. 16.
  Vedanti’s proposed construction: Data that defines the region including
  the size, shape, and location of the region within a frame, wherein the
  region is a matrix. Prelim. Resp. 14.

“matrix definition data” (claim 1)
  Google’s proposed construction: Uniform matrix dimensions or
  non-uniform matrix dimensions and sequences. Pet. 16.
  Vedanti’s proposed construction: Data that defines the region such as
  the size, shape, and location. Prelim. Resp. 15.
`Starting our analysis with “matrix data,” both parties’ proposed
`constructions relate to data defining the proportions of a matrix. Pet. 16–17;
`Prelim. Resp. 14–15. This comports with the Specification’s statement that,
`“[i]n one exemplary embodiment, the matrix data can include a matrix size,
`a region size, a region boundary for amorphous regions, or other suitable
`data.” Ex. 1001, 9:8–11. Vedanti’s construction also includes “location of
`the region within a frame,” which is consistent with the notion of a “region
`boundary” in this statement. Nevertheless, the Specification does not
`require that “matrix data” must include all of these markers for a particular
`defined region, so we consider them to be exemplary. Accordingly, for
`purposes of this Decision, we interpret “matrix data” to mean “data that
`defines at least one matrix.” For similar reasons, and because a matrix is a
`type of a region, see supra Section I.F.1, we interpret “region data” to mean
`“data that defines at least one region.”
`Regarding “matrix definition data” in claim 1, we observe that this
`recitation is similar to “region data” because both ultimately are received by
`the “data receiving system.” See Ex. 1001, 11:1–5. For purposes of this
`Decision, we adopt the same construction as for “matrix data,” namely,
`“data that defines at least one matrix.”
`
3. “pixel selection data” and “selection pixel data”
`Google contends these terms, which appear in claims 7 and 10, should
`be construed as “selected pixel data transmitted without any further
`processing for each region in a frame.” Pet. 17. Google argues the
`Specification of the ’339 patent states data need not “be compressed at the
`sending end and decompressed at the receiving end” because data
`optimization is used “to transmit only the data that is necessary for the
`application, such that decompression of the data on the receiving end is not
`required.” Id. at 17–18 (quoting Ex. 1001, 1:55–60) (emphases added by
`Google). Google also highlights that, in order to overcome an Examiner’s
`anticipation rejection during prosecution of the patent, the Applicants of the
`’339 patent argued “the generated set of pixel data is selected directly . . .
`and will be transmitted without any further processing, due to the fact that
`the applicants[’] invention does not compress nor decompress data.” Id. at
`8, 18 (both quoting Ex. 1002, 591) (emphasis omitted).
`Vedanti contends these terms should be construed as “pixel data
`representative of a region of a frame for transmission to a receiver.” Prelim.
`Resp. 20. Vedanti cites the Specification’s disclosure that a pixel selection
`system “selects one or more pixels within a predefined matrix or other
`region for transmission in an optimized data system.” Id. (quoting Ex. 1001,
`4:12–14). Vedanti also disputes the “without any further processing”
`language in Google’s proposed construction. Id. Specifically, Vedanti
`references a statement in the Specification describing the use of the ’339
`patent’s data optimization techniques “in conjunction with a compression
`system, a frame elimination system, or with other suitable systems or
`processes to achieve further savings in bandwidth requirements.” Id.
`(quoting Ex. 1001, 5:3–8).
`We agree with Vedanti that the Specification of the ’339 patent
`describes the possibility of further data processing beyond data optimization.
`See Ex. 1001, 5:3–8. Thus, even though certain portions of the Specification
`of the ’339 patent—and certain arguments in the prosecution history—
`mention that it is possible to optimize data without further processing, other
`portions of the Specification nonetheless contemplate the possibility of
additional processing after optimization. Because the Specification does not
expressly foreclose additional processing in all cases, we decline to
`adopt the “without any further processing” language in Google’s
`construction. The remaining language in the parties’ proposed constructions
`is similar; both parties acknowledge the selection of pixels relates to
`transmission and that pixel selection is done on a region-by-region basis.
`Pet. 17; Prelim. Resp. 20. This is supported by the Specification, which
`refers to the selection of “one or more pixel[s] within a predefined matrix or
`other region for transmission in an optimized data transmission system.” Ex.
`1001, 4:12–14. Accordingly, for purposes of this Decision, we construe
`“pixel selection data” and “selection pixel data” as “data pertaining to one or
`more pixels from a region selected for transmission.”
`
4. “analysis system”/“a analysis system receiving frame data and
generating region data”
`Based on, inter alia, the description of frame analysis system 106 in
`the Specification of the ’339 patent, Vedanti contends the entire phrase “a
`analysis system receiving frame data and generating region data” should be
`construed to mean “a system that receives the frame data for a frame and
`analyzes the frame data of the frame to generate region data.” Prelim. Resp.
`17–19. Google does not propose a construction for “analysis system” other
`than to say it should be construed “consistent with frame analysis system
`106 in FIGs. 1 and 2 of the specification and to provide antecedent basis for
`the term in dependent claims 2 and 3, line 1.” Pet. 18.
`We agree with the parties that the description of frame analysis
system 106 in the Specification of the ’339 patent provides context for how the
`recited “analysis system” of claim 1 should be construed. The Specification
`states that “[i]n one exemplary embodiment, frame analysis system 106 can
`analyze adjacent pixel data values in the frame, and can apply one or more
`predetermined variation tolerances to select a matrix size for a data
`optimization region.” Ex. 1001, 3:53–56 (emphases added). The
`Specification likewise states that frame analysis system 106 “can assign a
`different matrix size on a frame by frame basis.” Id. at 3:62–66 (emphasis
`added). Given the use of the permissive language “can,” these passages
`merely provide examples of the analysis that could be performed by the
`recited “analysis system.” Put simply, these examples of analysis are not
`required to be performed by the recited “analysis system.” For this reason,
`based on the current record, we decline to construe the “analysis system” of
`claim 1 as requiring any particular type of analysis beyond “receiving frame
`data and generating region data,” as is recited in claim 1 itself.
`
`
`II. ANALYSIS
`We now consider Google’s asserted ground and Vedanti’s arguments
`in its Preliminary Response to determine whether Google has met the
`“reasonable likelihood” threshold standard for institution under 35 U.S.C.
`§ 314(a). Google’s unpatentability contentions are supported by the
`testimony of John R. Grindon, D.Sc. See Ex. 1003.
`Google contends claims 1, 6, 7, 9, 10, 12, and 13 would have been
`obvious over the combination of Belfor, Thyagarajan, and Golin. Pet. 3, 19–
`58. Vedanti disputes Google’s contention. Prelim. Resp. 27–40.
`
A. Belfor
Belfor is directed to “a spatially adaptive subsampling scheme”
wherein an “image is subdivided into square blocks.” Ex. 1007, 1. Each
block uses a specific spatial sampling lattice; “[i]n detailed regions, a dense
sampling lattice is used, and in regions with little detail, a sampling lattice
with only a few pixels is used.” Id. Figure 4 of Belfor is reproduced below.
`
`
`
`Figure 4 depicts a set of three exemplary sampling lattices, which also are
`known as Modes 1, 2, and 3, where the solid dots represent the pixels that
`are transmitted. Id. at 4. In Mode 1, which can be used for highly detailed
`regions, all pixels are transmitted, whereas in Mode 3, which can be used for
`“areas with a slowly varying luminance,” only 4 of the 64 pixels are
`transmitted. Id. An “interpolation module” evaluates “a criterion function
`that reflects the quality of the block for [each] particular mode,” and a mode
`is assigned to each block accordingly. Id.
`Although Belfor advocates using square blocks of the same size,
`Belfor acknowledges that it would be ideal
`to segment the image into regions that require the same spatial
`sampling frequency and sample each region according to this
`frequency[, though s]uch a solution would require a detailed
`analysis of the image, and a large amount of side information
`would be needed to transmit the shape of the regions.
`
`Id.
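Belfor's mode-selection scheme can be sketched in miniature. The following Python toy is our own illustration, not Belfor's method as disclosed: Belfor evaluates a quality criterion function per block, whereas this sketch approximates "detail" with the pixel value range, and the 4×4 block size, thresholds, and function names are all assumptions.

```python
# Illustrative sketch only: per-block sampling-mode selection in the spirit
# of Belfor. Detail is approximated by the value range within the block;
# the thresholds and the 4x4 toy block size are assumptions, not Belfor's
# actual criterion function or lattice geometry.

def block_detail(block):
    """Crude detail measure: spread of pixel values in the block."""
    flat = [p for row in block for p in row]
    return max(flat) - min(flat)

def choose_mode(block, hi=40, lo=10):
    """Mode 1: keep every pixel; Mode 2: every 2nd; Mode 3: every 4th."""
    d = block_detail(block)
    if d >= hi:
        return 1
    if d >= lo:
        return 2
    return 3

def subsample(block, mode):
    """Keep only the pixels on the chosen mode's sampling lattice."""
    step = {1: 1, 2: 2, 3: 4}[mode]
    return [row[::step] for row in block[::step]]

flat_block = [[5] * 4 for _ in range(4)]                    # slowly varying
busy_block = [[(i * 17 + j * 29) % 97 for j in range(4)]    # highly detailed
              for i in range(4)]
```

As in Belfor, a slowly varying block ends up on the sparsest lattice (few pixels transmitted) while a detailed block keeps every pixel.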
`
B. Thyagarajan
`Thyagarajan is directed to “a compression scheme for image signals
`utilizing adaptively sized blocks and sub-blocks.” Ex. 1008, 1:9–11.
`14
`
`
`
`IPR2016-00215
`Patent 7,974,339 B2
`
`
`
`Thyagarajan describes the use of “contrast adaptive coding to achieve
`further bit rate reduction” by “assigning more bits to the busy areas and less
`bits to the less busy areas.” Id. at 4:17–24. Block sizes are assigned
`“us[ing] the variance of a block as a metric in the decision to subdivide a
`block.” Id. at 5:54–57. “Blocks with variances larger than a threshold are
`subdivided, while blocks with variances smaller than a threshold are not
`subdivided.” Id. at Abstract.
`Figure 3A of Thyagarajan is reproduced below.
`
`
`Figure 3A depicts an exemplary block size assignment after subdivision in
`which the blocks are of different sizes. Id. at 6:67–7:1. Thyagarajan
`contemplates both the use of blocks that are “N×N in size” and various other
`block sizes, such as an “N×M block size . . . where both N and M are
`integers with M being either greater than or less than N.” Id. at 4:66–5:3.
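Thyagarajan's variance-driven block size assignment can be sketched as a small recursive routine. This is our illustrative toy, not Thyagarajan's disclosed coder: the population-variance formula is standard, but the threshold value, minimum block size, quadrant split, and function names are assumptions for illustration.

```python
# Illustrative sketch only: variance-based block size assignment in the
# spirit of Thyagarajan ("[b]locks with variances larger than a threshold
# are subdivided"). Threshold, minimum size, and quadrant splitting are
# assumptions for illustration.

def variance(block):
    """Population variance of all pixel values in the block."""
    flat = [p for row in block for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

def assign_blocks(block, x=0, y=0, threshold=100.0, min_size=2):
    """Return (x, y, size) blocks; subdivide while variance > threshold."""
    size = len(block)
    if size <= min_size or variance(block) <= threshold:
        return [(x, y, size)]
    half = size // 2
    quads = []
    for qy in (0, half):
        for qx in (0, half):
            sub = [row[qx:qx + half] for row in block[qy:qy + half]]
            quads += assign_blocks(sub, x + qx, y + qy, threshold, min_size)
    return quads
```

A uniform block stays whole, while a high-contrast block is subdivided into smaller blocks, yielding the mixed block sizes shown in Figure 3A.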
`
`
`
`C. Golin
`Golin is directed to “video signal processing generally and
`particularly to systems for reducing the amount of digital data required to
`represent a digital video signal to facilitate uses, for example, such as the
`transmission, recording and reproduction of the digital video signal.”
`Ex. 1006, 1:10–15. A coder splits a video frame “into a number of small
`groups of similar pixels” called “regions.” Id. at 11:44–46. “For each
`region a code is produced for representing the values of all pixels of the
`region.” Id. at 11:46–47. Figure 26 of Golin is reproduced below.
`
`
`Figure 26 depicts a “quad-tree decomposition” wherein regions are split in
`both horizontal and vertical directions. Id. at 13:40–49. Golin also
`describes a “roughness” estimator for detecting region edges in the pixel
`data based on large changes in adjacent pixels, i.e., when the values of
`adjacent pixels differ by more than a threshold value. Id. at 19:34–44, Fig.
`18. If edges are present in a region, the region is split horizontally or
`vertically. Id. at 20:47–63. Golin also states that “multipoint interpolation
`techniques” can be used as an alternative way of determining roughness. Id.
`at 20:64–66.
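Golin's adjacent-pixel "roughness" test can likewise be sketched briefly. The following Python is our own illustration, not Golin's coder: the threshold value, the choice of split direction, and the function names are assumptions, and Golin's alternative multipoint interpolation techniques are not modeled.

```python
# Illustrative sketch only: edge detection via adjacent-pixel differences in
# the spirit of Golin's "roughness" estimator, followed by a simple
# horizontal/vertical region split. Threshold and names are assumptions.

def has_edge(region, threshold=50):
    """True if any horizontally or vertically adjacent pixels differ by
    more than the threshold (a large change signals a region edge)."""
    h, w = len(region), len(region[0])
    for i in range(h):
        for j in range(w):
            if j + 1 < w and abs(region[i][j] - region[i][j + 1]) > threshold:
                return True
            if i + 1 < h and abs(region[i][j] - region[i + 1][j]) > threshold:
                return True
    return False

def split(region, vertical=True):
    """Split a region in half, vertically (left/right) or horizontally."""
    if vertical:
        mid = len(region[0]) // 2
        return [row[:mid] for row in region], [row[mid:] for row in region]
    mid = len(region) // 2
    return region[:mid], region[mid:]

smooth = [[100, 102], [101, 103]]   # no edge: small adjacent differences
edged = [[100, 200], [100, 200]]    # vertical edge down the middle
```

Splitting the edged region vertically yields two halves that each pass the roughness test, mirroring Golin's split-until-smooth behavior.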
`
`D. Obviousness Analysis for Claims 1, 6, and 13
`Claims 1, 6, and 13 are unpatentable “if the differences between the
`subject matter [claimed] and the prior art are such that the subject matter as a
`whole would have been obvious at the time the invention was made to a
`person having ordinary skill in the art to which said subject matter pertains.”
`35 U.S.C. § 103(a).2 Google’s obviousness analysis relies on Belfor for
`teaching the recited pixel selection system and data receiving system of
`claim 1. Pet. 32–36. Regarding the recited analysis system, Google relies
`on a combination of Belfor, Thyagarajan, and Golin. Id. at 27–32.
`Specifically, Google cites Belfor for teaching “receiving frame data (the
`input image) and generating region data (blocks) comprised of high detail
`and or low detail (some are in detailed regions and some are in regions with
`little detail).” Id. at 28 (citing Ex. 1003 ¶ 111; Ex. 1007, 1). Google
`proposes combining this teaching from Belfor with Thyagarajan’s teachings
`involving subdivision of an image into blocks of various sizes based on
`comparing the variance in pixels in a block with a threshold. Id. at 29 (citing
`Ex. 1008, 4:66–5:3, 5:54–7:3, Fig. 3A). Google argues “[t]he block
`subdivision of Thyagarajan is a simple substitution for the block size
`determination of Belfor” that would have been motivated by Belfor’s
`purported “suggestion to find a better block subdividing method that
`
`
`2 The Leahy-Smith America Invents Act, Pub. L. No. 112-29, 125 Stat. 284
`(2011) (“AIA”), amended 35 U.S.C. § 103. Because the ’339 patent has an
`effective filing date before the effective date of the applicable AIA
`amendment, throughout this Decision we refer to the pre-AIA version of 35
`U.S.C. § 103.
`combines the advantages of using both large blocks and small blocks.” Id. at
`30 (citing Ex. 1003 ¶ 116; Ex. 1007, 4).
`Furthermore, because “Thyagarajan is based on a derived mean value
`[and] not a direct comparison of an amount of variation between pixels,” id.
`at 31; see also Ex. 1008, 5:60–65 (setting forth an exemplary formula for
`computing variance using mean pixel values), Google further proposes
`adding Golin to the combination. Pet. 31. Google cites Golin’s teachings on
`a “roughness test” for detection of region edges by comparing the
`differences of adjacent pixels with a threshold value. Id. (citing Ex. 1006,
`20:47–64). Google argues an ordinarily skilled artisan would have had
`reason to replace Thyagarajan’s “pixel variation detail determination” with
`Golin’s “pixel variation edge detector” because it is suggested by the
`references themselves and would have been a simple substitution of one
`known element for another to obtain predictable results. Id. at 32 (citing
`Ex. 1003 ¶ 121).
`Considering Google’s analysis and submitted evidence, and Vedanti’s
`Preliminary Response, we are satisfied there is a reasonable likelihood that
`Google would prevail in showing claim 1 would have been obvious over the
`combination of Belfor, Thyagarajan, and Golin. We add the following for
`additional explanation.
`Vedanti calls into question Google’s rationale for combining Belfor,
`Thyagarajan, and Golin. Specifically, Vedanti highlights that Belfor teaches
`subdividing an image into uniformly sized blocks. Prelim. Resp. 21, 28
`(both citing Ex. 1007, 4). Consonant with Belfor’s teachings, Vedanti states
`“[t]he ‘important system parameter’ of a uniform block size minimizes the
`required side information for the regions.” Id. at 29 (quoting Ex. 1007, 4).
`
`Vedanti contrasts Belfor’s teachings with those of Thyagarajan, which
`“performs subdivision on a block, resulting in blocks of different sizes.” Id.
`at 27 (citing Ex. 1008, 3). Accordingly, Vedanti contends it would not have
`been a simple substitution to replace Belfor’s “system-wide block size” with
`Thyagarajan’s block size assignment algorithm.
`We are not persuaded on this record that Belfor and Thyagarajan are
`incompatible, or that their teachings are so different as to make them
`inapplicable to one another, based on Belfor’s use of a fixed block size.
`Google’s rationale for the combination is premised on, inter alia, Dr.
`Grindon’s testimony characterizing the interplay of Belfor’s fixed block size
`and variable sampling modes as a group of smaller matrices variably sized
`according to the sampling lattice. See Pet. 22 (citing, inter alia, Ex. 1003
`¶¶ 93–94).
`The illustration from paragraph 93 of the Grindon Declaration is
`reproduced below.
`
`
`This illustration is Dr. Grindon’s depiction of a hypothetical image frame
`having nine regions in which various sampling lattices from Belfor are
`applied. Ex. 1003 ¶ 93. The illustration from paragraph 94 of the Grindon
`Declaration is reproduced below.
`
`
This illustration is Dr. Grindon’s depiction of the same hypothetical image
from above, but this time Dr. Grindon states “[e]ach block in such an image
could be modeled in terms of smaller matrices, of size according to the
sampling lattice. In this way, Belfor may teach or suggest regions of
varying dimensions.” Id. ¶ 94. On the present record, Dr. Grindon’s
testimony provides some support for Google’s contentions regarding the
combination of Belfor and Thyagarajan and is corroborated by evidence in
the record.
`In particular, Belfor acknowledges “[t]he size of the blocks is an
`important system parameter,” and recognizes the tradeoffs inherent in using
`smaller and larger block sizes. Ex. 1007, 4 (quoted at Pet. 24). And,
`ultimately, both Belfor and Thyagarajan teach block encoding methods for
`images using pixel sampling, which supports Google’s simple substitution
`theory. Id. at 24–25 (citing Ex. 1003 ¶¶ 97–98). On these bases, we do not
`agree with Vedanti that Belfor and Thyagarajan are so “different and
`inapplicable to one another” as to undermine their combination. Prelim.
`Resp. 29. Based on the current record, Google’s evidence is sufficient to
`demonstrate a reasonable likelihood of prevailing, but the ultimate
`assessment of that evidence, including Dr. Grindon’s testimony, will be
`based on the complete record at the end of trial.
`Vedanti also argues Google has failed to provide a rationale for
`combining the teachings of Belfor and Thyagarajan with those of Golin.
`Prelim. Resp. 30. Vedanti’s argument is premised on Golin’s horizontal and
`vertical region splitting and its possibility of creating rectangular blocks, see
`id. at 30–32 (citing Ex. 1006, 22:65–23:31), which Vedanti contends is
`incompatible with the Belfor-Thyagarajan combination’s uniform square
`blocks. See id. at 31. We are not persuaded by Vedanti’s arguments based
`on the current record because we do not agree that the Belfor-Thyagarajan
`combination posited by Google would result in uniform square blocks. To
`the contrary, and as stated in the previous paragraph, Google proposes
`modifying Belfor’s fixed block size with Thyagarajan’s variable block
`subdivision scheme. See Pet. 22–25. In addition, Google cites Golin for its
`“pixel variation edge detector” algorithm, and not its particular method of
`splitting blocks. See Pet. 26, 31–32.
`Furthermore, we are satisfied that Google has shown at this stage of
`the proceeding some articulated reasoning with some rational underpinning
`that would support the legal conclusion of obviousness. See KSR Int’l Co. v.
`Teleflex Inc., 550 U.S. 398, 417–18 (2007). Specifically, Google’s
`substitution of Golin’s pixel variation edge detector for Belfor-
`Thyagarajan’s “contrast method” arises from common teachings on
`“determining when to subdivide a block based on some measure of pixel
`variation to achieve a balance between the amount of data reduction and
`image quality.” See Pet. 25–26 (citing Ex. 1003 ¶¶ 102–03). We also are
`persuaded at this stage of the proceeding by Google’s and Dr. Grindon’s
`assertions that substituting one algorithm for another in this way would have
`led to predictable results and that a person of ordinary skill in the art would
`have had reason to make the proffered combination. See id.
`With respect to the “analysis system” of claim 1, Google cites
`Belfor’s input image for teaching “receiving frame data.” Pet. 28. This is
`supported by the adaptive coding scheme illustrated in Figure 5 of Belfor,
`which is reproduced in Google’s Petition. Id. at 27–28; see also Ex. 1007, 4
`(referring to “[t]he input image” with reference to Figure 5). Google also
`cites Belfor’s subdivision of the image into blocks for teaching “generating
`region data.” Pet. 19, 28 (both quoting Ex. 1007, 1).
`Vedanti contends the cited references “fail to receive and analyze the
`frame data to generate regions” because each reference “subdivides the
`image characterized by the frame data into a series of blocks without
`performing an analysis of the frame data.” Prelim. Resp. 33–35. Vedanti
`bases its argument on Belfor’s use of a single block size and Thyagarajan’s
`teaching that “an image is divided into blocks of pixels