`
`
`
`
`
Filed on behalf of Unified Patents Inc.
By:

ERISE IP, P.A.
Eric A. Buresh, Reg. No. 50,394
eric.buresh@eriseip.com
7015 College Blvd., Suite 700
Overland Park, Kansas 66211

UNIFIED PATENTS INC.
Ashraf Fawzy, Reg. No. 67,914
afawzy@unifiedpatents.com
1875 Connecticut Ave., NW, Floor 10
Washington, D.C. 20009

Jonathan R. Bowser, Reg. No. 54,574
jbowser@unifiedpatents.com
Unified Patents Inc.
1875 Connecticut Ave., NW, Floor 10
Washington, D.C. 20009
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
`
`UNIFIED PATENTS INC.,
`Petitioner
`
`v.
`
`DYNAMIC DATA TECHNOLOGIES, LLC,
`Patent Owner
`
`
`Case No. IPR2019-01085
`Patent No. 8,135,073
`
`PETITION FOR INTER PARTES REVIEW
`OF U.S. PATENT NO. 8,135,073
`
`
`
`
`
`
TABLE OF CONTENTS

I. Introduction
II. Summary of the Patent
A. Technology Background
B. Description of the Alleged Invention of the ’073 Patent
C. Summary of the Prosecution History of the ’073 Patent
D. Level of Ordinary Skill in the Art
III. Requirements for Inter Partes Review Under 37 C.F.R. § 42.104
A. Grounds for Standing
B. Identification of Challenged Claims and Relief Requested
C. Claim Construction Standard
IV. The Challenged Claims are Unpatentable
A. Ground 1: Claims 1-4, 14, 18, and 20 are Obvious Over Yang in view of Paik
B. Ground 2: Claims 6-8, 16, and 21 are Obvious Over Yang in View of Paik in Further View of Liu
C. Ground 3: Claim 19 is Obvious Over Yang in View of Paik in Further View of Kawamura
V. SHOWING OF ANALOGOUS, PRIOR ART STATUS
VI. DISCRETIONARY INSTITUTION
VII. CONCLUSION
VIII. Mandatory Notices Under 37 C.F.R. § 42.8(a)(1)
A. Real Parties-in-Interest
B. Related Matters
C. Lead and Back-Up Counsel Under 37 C.F.R. § 42.8(b)(3)
`
`
`
`
`
`
I. Introduction
`
`Petitioner Unified Patents Inc. (“Petitioner” or “Unified”) respectfully
`
`requests inter partes review (“IPR”) of Claims 1-4, 6-8, 14, 16, 18-21 (collectively,
`
`the “Challenged Claims”) of U.S. Patent 8,135,073 (“the ’073 Patent,” Ex. 1001).
`
`The ’073 Patent describes systems and methods for decoding video data that
`
`includes determining a re-mapping strategy for video enhancement of a first video
`
`frame and re-using the same re-mapping strategy to enhance a second frame within
`
`the same video stream. This method of re-using a re-mapping strategy was known
`
`long before the ’073 Patent, as shown by both Yang (Ex. 1004) and Paik (Ex. 1005),
`
`as discussed in more detail below. The Challenged Claims are therefore obvious over
`
`the prior art and should be found unpatentable.
`
II. Summary of the Patent
`
`A. Technology Background
`Digital video is formed from a sequence of individual video frames that
`
`include pixel data. See Freedman Decl. (Ex. 1003) at ¶ 34 (citing Richardson (Ex.
`
`1009) at 36-38). Each frame is an array of pixels organized in rows and columns to
`
`form the image represented by the frame, where the pixels reflect characteristics of
`
`objects represented in a scene of a video. Richardson (Ex. 1009) at 36-38. These
`
`rows and columns of pixels are generally divided into small regions called “blocks”
`
`of data. Id. As shown below, each frame is a table or matrix of pixels, i.e., pixels in
`
`
`
`1
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`rows and columns form an image represented by the frame. Richardson (Ex. 1009)
`
`at 10-11, 17-19. The pixel data reflects characteristics of objects represented in a
`
`scene of the video, such as shapes and edges. Id. at 10, 33; see also Freedman Decl.
`
`(Ex. 1003) at ¶ 34.
`
`Richardson (Ex. 1009) at Fig. 2.2.
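For illustration only, the frame-and-block structure described above can be sketched in a few lines of Python; the 64x64 frame size, the 16x16 block size, and the function name below are illustrative assumptions, not code drawn from Richardson:

```python
# Illustrative sketch only: a frame as a 2-D array of pixels (rows and
# columns), divided into the small square "blocks" described above.
import numpy as np

def split_into_blocks(frame: np.ndarray, block_size: int = 16):
    """Yield (row, col, block) for each block_size x block_size region."""
    rows, cols = frame.shape
    for r in range(0, rows - block_size + 1, block_size):
        for c in range(0, cols - block_size + 1, block_size):
            yield r, c, frame[r:r + block_size, c:c + block_size]

# A hypothetical 64x64 grayscale frame: pixel intensities in rows and columns.
frame = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(sum(1 for _ in split_into_blocks(frame)))  # 16 blocks of 16x16 pixels
```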
`
`
`
`Digital video files can be large due to the large amounts of image data
`
`associated with each frame. Id. at 2-5. To efficiently transmit them to end-user
`
`devices for quick playback, video coding techniques are used to compress (i.e.
`
`encode) video files for efficient transmission and later receipt and decompression
`
`(i.e., decoding), followed by output at an end-user display device. Id. Such
`
`compression is achieved, in part, by removing redundancy in and between frames.
`
Freedman Decl. (Ex. 1003) at ¶ 35. Specifically, within a particular sequence of video images, individual
`
`
`
`2
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`frames can be correlated to benefit from redundant video information from within a
`
`given frame (spatial correlation) and from successive frames captured at around the
`
`same time (temporal correlation):
`
`
`
`Richardson (Ex. 1009) at 53, Fig. 3.2.
`
`Many aspects of video coding were well-known long before the ’073 Patent,
`
including region-based video coding that uses prediction techniques to remove spatial
`
`and temporal redundancy in coded video data. See ’073 Patent (Ex. 1001) at 2:20-
`
`33. As acknowledged in the ’073 Patent (and illustrated in Fig. 3.2 above), it was a
`
`well-known aspect of the Moving Picture Experts Group or “MPEG-2” standard
`
`(adopted in 1996) to encode video frames using spatial prediction within a single
`
`
`
`3
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`frame to create so-called “I-frames.” Id. at 2:20-33. Similarly, the MPEG standard
`
`utilized temporal prediction that relies on similarities between adjacent frames
`
`within a video stream. See Freedman Decl. (Ex. 1003) at ¶¶ 35-36, 41 (citing Symes
`
`at 159-162). For example, “P-frames” are predicted based on the information
`
`contained in a previous frame, and “B-frames” are predicted based on information
`
`contained in previous and subsequent frames (bi-directional prediction). ’073 Patent
`
at 2:24-33; see also Freedman Decl. (Ex. 1003) at ¶¶ 38, 41-42, 45-46 (citing Symes
`
`Ex. 1010 at 179). An illustration of the temporal relationship between I, P, and B
`
`frames is shown below, where each frame in the video stream is displayed in time
`
`going from left to right, starting with the I frame:
`
`Symes (Ex. 1010) at 179.
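The I/P/B dependency pattern described above can be illustrated with a short, hypothetical sketch; the particular GOP sequence and the helper function below are assumptions for illustration and are not code from Symes or the MPEG-2 standard:

```python
# Illustrative sketch of I/P/B prediction dependencies in display order.
GOP = ["I", "B", "B", "P", "B", "B", "P"]  # hypothetical sequence, left to right

def reference_frames(index: int, gop: list) -> list:
    """Return the frame indices that the frame at `index` is predicted from."""
    kind = gop[index]
    if kind == "I":
        return []                                   # intra-coded: spatial prediction only
    anchors = [i for i, k in enumerate(gop) if k in ("I", "P")]
    prev = max(i for i in anchors if i < index)     # previous I or P frame
    if kind == "P":
        return [prev]                               # forward prediction only
    nxt = min(i for i in anchors if i > index)      # next I or P frame (assumed to exist)
    return [prev, nxt]                              # B-frame: bi-directional prediction

for i, kind in enumerate(GOP):
    print(i, kind, reference_frames(i, GOP))
```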
`
`After the video data is encoded, it is stored or transmitted to a receiver for
`
`eventual decoding and display to a user. Decoders generally reverse the coding
`
`
`
`
`
`4
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`process performed by the corresponding encoder. Freedman Decl. (Ex. 1003) at ¶
`
`40; Richardson (Ex. 1009) at 52, 98.
`
`B. Description of the Alleged Invention of the ’073 Patent
`The ’073 Patent is directed to a method and device for decoding digital video
`
`data based on the results of decoding prior video frames. See ’073 Patent at Abstract.
`
`Specifically, independent Claims 1 and 14 of the ’073 Patent recite determining a
`
`re-mapping strategy to enhance a first video frame and re-using that strategy on
`
`subsequent frames to avoid determining an enhancement strategy for each new
`
`frame. ’073 Patent at Claims 1 and 14. Further, Claims 6 and 7 of the ’073 Patent
`
`recite a method of determining whether subsequent frames are a close match to the
`
`original frame to ensure that re-using the re-mapping strategy is still effective. ’073
`
`Patent at Claims 6-7. Both of these concepts are found in the prior art, as detailed
`
`below.
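The claimed determine-then-re-use structure can be illustrated with a minimal, hypothetical sketch; the function names below are invented, and a generic intensity re-mapping (plain histogram equalization) stands in for whatever enhancement analysis is used, so this is not code from the ’073 Patent:

```python
# Conceptual sketch: determine a re-mapping strategy for a first decoded
# frame, then re-use it on subsequent frames instead of recomputing it.
import numpy as np

def determine_strategy(frame: np.ndarray) -> np.ndarray:
    """Analyze a decoded 8-bit frame and build a 256-entry re-mapping table."""
    hist = np.bincount(frame.ravel(), minlength=256)
    cdf = np.cumsum(hist) / frame.size
    return np.round(255 * cdf).astype(np.uint8)

def decode_and_enhance(decoded_frames):
    strategy = None
    for frame in decoded_frames:
        if strategy is None:           # first frame: determine the strategy
            strategy = determine_strategy(frame)
        yield strategy[frame]          # later frames: re-use, not recompute
```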
`
`C. Summary of the Prosecution History of the ’073 Patent
`The ’073 Patent issued from an application filed on June 7, 2005, claiming
`
`priority to a PCT application filed on December 12, 2003 and a provisional
`
`application filed on December 20, 2002.1 See ’073 Patent (Ex. 1001).
`
`
`1 For the purposes of this Petition, Petitioner assumes, but does not concede, that
`
`December 20, 2002 is the earliest priority date of the ’073 Patent.
`
`
`
`5
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`On November 15, 2010, the Examiner rejected a subset of claims under 35
`
U.S.C. § 102(e) as anticipated by U.S. Patent 7,609,767 to Srinivasan. See ’073 File History
`
`(Ex. 1002) at 221-228. After Patent Owner demonstrated that Srinivasan was not
`
`prior art under § 102(e), a Notice of Allowance for all claims was issued on June 27,
`
`2011. Id. at 100-106.
`
`D. Level of Ordinary Skill in the Art
`A person having ordinary skill in the art (“PHOSITA”) would have been a
`
`person having, as of December 20, 2002: (1) at least an undergraduate degree in
`
`electrical engineering or closely related scientific field, such as physics, computer
`
`engineering, or computer science, or similar advanced post-graduate education in
`
`this area; and (2) two or more years of experience with video or image processing.
`
`Less work experience may be compensated by a higher level of education, such as a
`
`Master’s Degree, and vice versa. See Freedman Decl. (Ex. 1003) at ¶¶ 30-31.
`
`III. Requirements for Inter Partes Review Under 37 C.F.R. § 42.104
`
`A. Grounds for Standing
`Petitioner certifies that the ’073 Patent is available for IPR and that Petitioner
`
`is not barred or estopped from requesting IPR challenging the Challenged Claims.
`
`37 C.F.R. § 42.104(a).
`
`B. Identification of Challenged Claims and Relief Requested
`In view of the prior art, evidence, and analysis discussed in this Petition, IPR
`
`should be instituted and Claims 1-4, 6-8, 14, 16, 18-21 of the ’073 Patent should be
`
`
`
`6
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`found unpatentable and cancelled based on the following grounds of unpatentability.
`
`37 C.F.R. § 42.104(b)(2). None of the prior art references listed in the grounds below
`
`were cited during prosecution of the ’073 Patent.
`
Proposed Grounds of Unpatentability (with supporting Exhibits):

Ground 1 (Exhibits 1004, 1005): Claims 1-4, 14, 18, and 20 are obvious under § 103(a) over U.S. Patent No. 6,873,657 to Yang et al. (“Yang” or “Ex. 1004”) in view of U.S. Patent No. 6,163,621 to Paik et al. (“Paik” or “Ex. 1005”).

Ground 2 (Exhibits 1004, 1005, 1006): Claims 6-8, 16, and 21 are obvious over Yang in view of Paik in further view of U.S. Patent No. 5,809,173 to Liu et al. (“Liu” or “Ex. 1006”).

Ground 3 (Exhibits 1004, 1005, 1007): Claim 19 is obvious over Yang in view of Paik in further view of U.S. Patent No. 6,078,693 to Kawamura et al. (“Kawamura” or “Ex. 1007”).

In view of the prior art, evidence, and arguments herein, the Challenged
Claims are unpatentable and should be cancelled. 37 C.F.R. § 42.104(b)(1). Based
`
`on the prior art references identified below in light of the knowledge of a PHOSITA,
`
`IPR of these claims should be instituted. 37 C.F.R. § 42.104(b)(2). This review is
`
`governed by pre-AIA 35 U.S.C. §§ 102 and 103.
`
`Section IV, infra, identifies where each element of the Challenged Claims is
`
`found in the prior art. 37 C.F.R. § 42.104(b)(4). The exhibit numbers of the evidence
`
`relied upon to support the challenges are provided above, and the relevance of the
`
`
`
`7
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`evidence to the challenges raised is provided in Section IV. 37 C.F.R. § 42.104(b)(5).
`
`Exhibits 1001-1017 are also attached.
`
`C. Claim Construction Standard
`The Board “constru[es each] claim in accordance with the ordinary and
`
`customary meaning of such claim as understood by one of ordinary skill in the art
`
`and the prosecution history pertaining to the patent.” 37 C.F.R. § 42.100(b). The
`
`“ordinary and customary meaning” also “tak[es] into consideration the language of
`
`the claims [and] the specification.” Panel Claw, Inc. v. Sunpower Corp., IPR2014-
`
`00386, Paper No. 7 at 7 (PTAB June 30, 2014) (citing Phillips v. AWH Corp., 415
`
`F.3d 1303, 1312–13 (Fed. Cir. 2005)). Petitioner submits that no claim terms require
`
`construction and that the claim terms should be afforded their ordinary and
`
`customary meaning as understood by one of ordinary skill in the art.
`
`IV. The Challenged Claims are Unpatentable
`
`The below grounds demonstrate how the cited prior art teaches and/or renders
`
`obvious each and every limitation of the Challenged Claims.
`
`
`
`8
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`A. Ground 1: Claims 1-4, 14, 18, and 20 are Obvious Over Yang in view
`of Paik
`1. Claim 1
`
`[1(A)i] A method, comprising: receiving, at a decoder, a video stream containing
`encoded frame based video information including an encoded first frame and an
`encoded second frame,
`
`Yang teaches this element. For example, Yang teaches a method “compris[ing]
`
`the steps of: receiving the enhanced signal including at least one frame” (i.e.,
`
`receiving…a video stream containing encoded frame based video information)2.
`
`Yang (Ex. 1004) at 4:4-9. Yang’s disclosure that the video stream contains “at least
`
`one frame” indicates the video stream in Yang is “frame based” because it includes
`
`at least one frame. See Freedman Decl. (Ex. 1003) at ¶ 49 (explaining that video
`
`comprising frames is based on frames and therefore “frame based”). Further, Yang
`
is based on the MPEG standard that encompasses encoded, frame based video, and
`
`therefore the encoded video in Yang that is in accordance with the MPEG standard
`
`is frame based. Yang (Ex. 1004) at 1:24-2:17, 2:44-55 (“An HD program is typically
`
broadcast at 20 Mb/s and encoded according to the MPEG-2 video standard”), 5:5-14
`
`
`2 Throughout this Petition, text in italics is used to signify claim language, while
`
`reference names are also italicized.
`
`
`
`9
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`and 5:29-30 (describing television receiver 110 in Fig. 1 as including an MPEG
`
decoder 130); see also Freedman Decl. (Ex. 1003) at ¶¶ 34-36, 49.
`
`The video signal is received, at a decoder, as depicted in annotated Figure 1
`
`of Yang below, showing the video signal being first received at a tuner 120,
`
`proceeding through an IR processor 125, and then being received at a component
`
`labelled as “MPEG DECODER 130”:
`
`
`The claimed “decoder” of the ’073 Patent includes all of the components 120
`
`depicted in Figure 2 (below), including the separate “decoding unit 124.” ’073 Patent
`
`at 5:41-50.
`
`
`
`10
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`
`
`’073 Patent at Figure 2. Thus, the claimed decoder in the ’073 Patent includes
`
`additional processing units beyond just the “decoding unit 124” that are necessary
`
`to provide the decoded output. Similarly, as depicted below in Figure 1 in Yang,
`
`while element 130 is described as an “MPEG decoder,” the additional processing
`
`elements 135 that combine to provide the desired output are components of a
`
`“decoder” such as that described and claimed in the ’073 Patent. Thus, one of skill
`
`in the art would recognize that, for example, the “post processing circuits” 135,
`
`including the “adaptive peaking unit” 140 are part of a decoder. See Freedman Decl.
`
(Ex. 1003) at ¶ 50; Yang (Ex. 1004) at Fig. 1. Yang further teaches the frame-based video signal
`
`includes “a set of I, P and B frames” (i.e., an encoded first frame and an encoded
`
`second frame). See Freedman Decl. (Ex. 1003) at ¶¶ 38, 41-42, 46 (explaining that
`
`
`
`11
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`I, P, and B frames are separately encoded video frames and citing Symes at 178-
`
`184).
`
`[1(A)ii] the encoding of the second frame depends on the encoding of the first
`frame,
`
`Yang teaches this limitation. In Yang, the encoding of P and B video frames
`
`(i.e., second frames) depends upon the encoding of a preceding I frame (i.e., first
`
`frame). Yang (Ex. 1004) at 2:20-33. Yang teaches: “[i]t is possible then to accurately
`
`predict the data of one frame based on the data of a previous frame. In P frames each
`
`16x16 sized macroblock is predicted from the macroblocks of previously encoded I
`
`or P frames.” Yang (Ex. 1004) at 2:17-43 (emphasis added). Thus, Yang teaches the
`
`encoding of the second frame (P frame) depends on the encoding of the first frame
`
`(I frame). See Freedman Decl. (Ex. 1003) at ¶¶ 38, 41-42, 46 (explaining the
`
`temporal relationship between I, P, and B frames and explaining how the encoding
`
`of P and B frames depends on the encoding of I frames according to the MPEG-2
`
`video coding standard and citing Symes (Ex. 1010) at 178-180).
`
`[1(A)iii] the encoding of the second frame includes motion vectors indicating
`differences in positions between regions of the first frame, the motion vectors
`define the correspondence between regions of the second frame and
`corresponding regions of the first frame; and
`
`Yang teaches the P frames (i.e. second frames) are compressed (i.e. encoded)
`
`using “motion compensation based prediction, which exploits
`
`temporal
`
`redundancy.” Yang (Ex. 1004) at 2:19-24; see also Freedman Decl. (Ex. 1003) at ¶
`
`
`
`12
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`40 (explaining that video frame compression is a subset of encoding). “Since frames
`
`are closely related, it is assumed that a current picture can be modeled as a translation
`
`of the picture at the previous time.” Yang (Ex. 1004) at 2:17-25. Yang further teaches
`
`the encoder searches the previous frame to establish motion vectors that indicat[e]
`
`differences in positions between regions of the first frame, and define the
`
`correspondence between regions of the second frame and corresponding regions of
`
`the first frame:
`
`in half pixel increments for other macroblock locations that are a close
`match to the information that is contained in the current macroblock.
`The displacements in the horizontal and vertical directions of the best
`match macroblocks from a cosited macroblock are called motion
`vectors. The difference between the current block and the matching
`block and the motion vector are encoded.
`
`Id. at 2:31-38 (emphasis added). Yang teaches correspondence between regions of
`
`the second frame and corresponding regions of the first frame because it expressly
`
`teaches “motion vectors” that comprise the “displacements in the horizontal and
`
`vertical directions of the best match macroblocks from a cosited macroblock,” and
`
`a PHOSITA would have understood that macroblocks are regions of a frame, and
`
`further would have understood that “displacements in the horizontal and vertical
`
`direction of the best match macroblocks” establish the correspondence between such
`
`regions in a first and second frame because “displacement” refers to the amount a
`
`
`
`13
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`region has moved (i.e., correspondence). See Freedman Decl. (Ex. 1003) at ¶¶ 37,
`
`46 (citing Symes at 159-166).
`
`Thus, Yang teaches that P frames (i.e. second frames) include motion vectors
`
`indicating differences in positions between regions of the first frame, the motion
`
`vectors define the correspondence between regions of the second frame and
`
`corresponding regions of the first frame.
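The block-matching concept quoted above can be sketched as follows; the exhaustive search, the search range, and the sum-of-absolute-differences matching metric are illustrative assumptions rather than Yang’s particular encoder:

```python
# Illustrative sketch: the motion vector is the displacement of the
# best-matching region in the previous (first) frame from the co-sited
# position of a macroblock in the current (second) frame.
import numpy as np

def motion_vector(prev: np.ndarray, cur: np.ndarray,
                  r: int, c: int, size: int = 16, search: int = 8):
    """Return the (dy, dx) displacement of the best match for the block at (r, c)."""
    block = cur[r:r + size, c:c + size].astype(int)
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):            # candidate displacements
        for dx in range(-search, search + 1):
            y, x = r + dy, c + dx
            if 0 <= y <= prev.shape[0] - size and 0 <= x <= prev.shape[1] - size:
                candidate = prev[y:y + size, x:x + size].astype(int)
                cost = np.abs(candidate - block).sum()   # SAD matching metric
                if best_cost is None or cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv   # the displacement defining the region correspondence
```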
`
`[1(B)] via the decoder: decoding the first frame;
`
`Yang teaches decoding the first frame via a decoder. The ’073 Patent admits
`
`that “[d]ecoding of I-frames is well known in the art.” ’073 Patent (Ex. 1001) at
`
`2:29-33. Consistent with the ’073 Patent and well-known teachings in the art, Yang
`
`discloses that the decoder in Figure 1 decodes I frames (i.e., the first frame). Yang
`
`(Ex. 1004) at 8:24-26 (the decoder in Figure 1 “decod[es] a video signal
`
`representative of a set of I, P and B frames”); see Freedman Decl. (Ex. 1003) at ¶¶
`
`40, 50 (explaining Yang’s disclosure that it “decod[es] a video signal” would be
`
understood by a PHOSITA to refer to decoding a signal comprising video data).
`
`[1(C)] determining a re-mapping strategy for video enhancement of the decoded
`
`first frame using a region-based analysis;
`
`Yang teaches an adaptive peaking unit (adaptive peaking unit 140 in Figure
`
`1), which is part of the decoder in Yang as described in the preceding section, that
`
`determines a re-mapping strategy for video enhancement of the decoded first frame.
`
`
`
`14
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`The adaptive peaking unit “generates a value of a gain for use in the adaptive peaking
`
`process, such as a sharpness enhancement algorithm.” Yang (Ex. 1004) at 5:36-44.
`
This sharpness enhancement algorithm and its associated “gain map” constitute a “re-
`
`mapping strategy for video enhancement” because the sharpness enhancement
`
`algorithm re-maps the pixels in a video stream (by way of its “gain map”) to enhance
`
`video quality through improved sharpness. See, e.g., Yang at 8:31-34 (“If the input
`
`video frame is an I-frame, the gain map (gain of each pixel in a frame) for the frame
`
is computed in accordance with prior art and stored into gain memory.”). The “gain
`
`map” in Yang dictates how each pixel is re-mapped from its initial value to a new
`
`value to enhance sharpness of the video frames. See Yang at 8:24-51; see also
`
`Freedman Decl. (Ex. 1003) at ¶ 51 (explaining the “gain map” of Yang). Further,
`
`sharpness enhancement is a form of video enhancement because Yang describes the
`
`sharpness enhancement algorithm as producing “an enhanced luminance signal for
`
`the video signals” as well as “utilizing the enhanced luminance signal to enhance the
`
`quality of video signals.” Yang (Ex. 1004) at 2:50-54; 5:39-43.
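The gain-map concept described above can be illustrated with a minimal sketch; the 3x3 high-pass filter and the form of the per-pixel re-mapping are illustrative assumptions, not Yang’s actual adaptive peaking algorithm:

```python
# Illustrative sketch: scale a high-frequency (detail) component by a
# per-pixel gain and add it back, re-mapping each pixel to a new value
# to enhance sharpness.
import numpy as np

def highpass(luma: np.ndarray) -> np.ndarray:
    """Crude 3x3 Laplacian-style detail extraction."""
    p = np.pad(luma.astype(float), 1, mode="edge")
    return (4 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]
            - p[1:-1, :-2] - p[1:-1, 2:])

def apply_gain_map(luma: np.ndarray, gain_map: np.ndarray) -> np.ndarray:
    """Re-map each pixel: output = input + per-pixel gain * detail, clipped to 8 bits."""
    enhanced = luma.astype(float) + gain_map * highpass(luma)
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```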
`
`Yang does not explicitly require the re-mapping strategy for video
`
`enhancement to be performed using region-based analysis because the “gain map”
`
`in Yang is applied on a pixel basis. However, the combination of Yang in view of
`
`Paik teaches region-based analysis for the re-mapping strategy. Paik teaches a
`
`region-based video enhancement algorithm that improves contrast, which is a video
`
`
`
`15
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`enhancement that is closely related to sharpness, by forming a histogram of intensity
`
`values in distinct regions of an image (i.e., using a region-based analysis) and re-
`
`mapping the intensity values to improve contrast and enhance the video (i.e., a re-
`
`mapping strategy for video enhancement), as illustrated in Figure 1 below:
`
Paik (Ex. 1005) at 7:51-8:48; Fig. 2; see Freedman Decl. (Ex. 1003) at ¶ 55
`
`(explaining that contrast is another form of video enhancement that is closely related
`
`to sharpness enhancement). Paik teaches a re-mapping strategy for video
`
`
`
`
`
`16
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`enhancement because Paik teaches, in step 18, that the “transform function [] maps
`
`the brightness levels to equalized brightness levels for improving image contrast.”
`
`Paik (Ex. 1005) at 4:41-67. As admitted by the ’073 Patent, region-based re-mapping
`
`strategies were “well known.” See ’073 Patent at 1:16-19; 2:34-43 (stating that
`
`“[m]ethods of determining re-mapping strategies for regions of decoded frames
`
`using such analysis are well known, and those skilled in the art are directed to U.S.
`
`Pat. No. 6,259,472 and U.S. Pat. No. 5,862,254 which disclose such re-mapping of
`
`intensity values”) (emphasis added). The contrast enhancement method of Paik is of
`
`the same kind as the exemplary contrast enhancing algorithms that are described as
`
`“well-known” in the ’073 Patent because Paik teaches the very same re-mapping of
`
`intensity values within regions of an image within a video stream. See Paik (Ex.
`
1005) at Abstract; 1:54-67; 2:57-67; 4:41-67; 7:51-8:48; see also Freedman Decl.
`
`(Ex. 1003) at ¶¶ 53, 55. Thus, Paik provides a prior art example of the very re-
`
`mapping strategy disclosed in the ’073 Patent as background art.
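The region-based analysis and re-mapping described above can be illustrated with a minimal sketch; the fixed region grid and the plain histogram-equalization transform are illustrative assumptions, not Paik’s exact method:

```python
# Illustrative sketch: form an intensity histogram per region of the frame
# and re-map that region's intensities through the resulting transform.
import numpy as np

def equalize_region(region: np.ndarray) -> np.ndarray:
    """Build and apply a re-mapping (transform) function for one region."""
    hist = np.bincount(region.ravel(), minlength=256)
    cdf = np.cumsum(hist) / region.size
    lut = np.round(255 * cdf).astype(np.uint8)   # the re-mapping function
    return lut[region]

def region_based_enhance(frame: np.ndarray, grid: int = 4) -> np.ndarray:
    """Split the frame into grid x grid regions and re-map each independently."""
    out = frame.copy()
    h, w = frame.shape
    rh, rw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            r, c = i * rh, j * rw
            out[r:r + rh, c:c + rw] = equalize_region(frame[r:r + rh, c:c + rw])
    return out
```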
`
`A PHOSITA would have been motivated to look to teachings of Paik at least
`
`because Yang teaches the fundamental aspect of the ’073 Patent—storing and re-
`
`using a re-mapping strategy for video enhancement on subsequent frames—and
`
`expressly teaches that its system “may be used with more than one type of video
`
`enhancement algorithm.” Yang (Ex. 1004) at 5:39-43 (emphasis added). Paik is
`
`exactly the alternate type of video enhancement algorithm explicitly suggested by
`
`
`
`17
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`Yang. See KSR Int’l. Co. v. Teleflex Inc., 550 U.S. 398, 401, 127 S. Ct. 1727, 1731
`
`(2007).
`
`Further, a PHOSITA would have been motivated to substitute the sharpness
`
`enhancing algorithm of Yang with the contrast enhancing algorithm of Paik because
`
`it was known that sharpness and contrast are interchangeable adjustments, and one
`
`adjustment is often more effective than another depending on the video source. See
`
`Freedman Decl. (Ex. 1003) at ¶ 55 (citing Kimoto (Ex. 1012) at 1:11-27 as within
`
`the knowledge of a PHOSITA and explaining that sharpness and contrast are closely
`
`related and one is often more effective for a particular video source). In essence,
`
`contrast and sharpness are two sides of the same coin—sharpness improves contrast
`
`near the edge transitions of luminosity within an image, and contrast enhancement
`
`improves contrast on a broader scale in an image and is not limited to the edge
`
`transitions. Id. (explaining that sharpness enhancement is contrast enhancement
`
`performed on a localized basis). Further, it was known that region-based video
`
`enhancement is often more effective than global or pixel-level video enhancement
`
`because region-based enhancement provides more relevant context from the nearby
`
`area of the image and is therefore not polluted by extraneous information from
`
`distant portions of the image, resulting in better image improvement, as described in
`
`Paik. See Paik (Ex. 1005) at 1:24-34; Freedman Decl. (Ex. 1003) at ¶ 55 (explaining
`
`the known problem, also identified in Paik, that degradation occurs when non-
`
`
`
`18
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`region-based video enhancement is used). Given Yang’s express teaching that
`
`reusing a video enhancement strategy can be performed with other types of video
`
`enhancement, a PHOSITA would have been motivated to replace the sharpness
`
`algorithm with one of the “well known” contrast enhancement methods to improve
`
`the related parameter of contrast when appropriate based on the type of video source
`
`to gain the advantages that the contrast algorithm would present in that setting. Yang
`
(Ex. 1004) at 5:39-43; Freedman Decl. (Ex. 1003) at ¶ 55.
`
`A PHOSITA would have been further motivated to substitute the sharpness
`
`enhancement of Yang with the contrast enhancement of Paik because it would have
`
`involved the use of a known interchangeable technique (i.e., contrast enhancement
`
`using region based analysis) to improve a similar device (i.e., video enhancement
`
`system of Yang) in the same way, because Yang’s sharpness algorithm and Paik’s
`
`contrast enhancement algorithm both adjust luminosity of the pixels within an image
`
`to improve video quality. See Freedman Decl. (Ex. 1003) at ¶ 55. A PHOSITA
`
would have had a reasonable expectation of success by adding the region-based
`
`enhancement algorithm of Paik to the storage and re-use teachings of Yang because
`
these video enhancement algorithms, as the ’073 Patent admits, were “well known”
`
`and within the skill of an ordinary artisan to implement. See Freedman Decl. (Ex.
`
`1003) at ¶ 55.
`
`
`
`19
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`[1(D)] re-mapping regions of the decoded first frame according to the
`determined video enhancement re-mapping strategy for the first frame so as to
`enhance the first frame;
`
`Yang teaches this element. To simplify the mapping for this element, the
`
`limitations in the claim have been color coded to match the corresponding disclosure
`
`in Yang, and Figure 1 of Yang, below:
`
`
`
`Id. at Fig. 1. Referring to Figure 1, Yang teaches:
`
`The output of adaptive peaking unit 140 is an enhanced luminance
`signal for the video signals that adaptive peaking unit 140 receives from
`MPEG decoder 130. The luminance signals determined by adaptive
`peaking unit 140 provides a more accurate, visually distinct and
`temporally consistent video image than that provided by prior art
`adaptive peaking units. Adaptive peaking unit 140 transfers the
`enhanced luminance signal to other circuits within post processing
`
`
`
`20
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`circuits 135. Post-processing circuits 135 are capable of utilizing the
`enhanced luminance signal to enhance the quality of video signals.
`
Yang (Ex. 1004) at 5:44-54. See Freedman Decl. (Ex. 1003) at ¶ 52. Thus, Yang’s disclosure
`
`that the “output of adaptive peaking unit 140 is an enhanced luminance signal”
`
teaches the claimed enhance the first frame. Similarly, Yang’s teaching of “video
`
`signal that adaptive peaking unit 140 receives from MPEG decoder 130” teaches the
`
`claimed decoded first frame because it is a video signal including frames that is
`
`received from the decoder and is therefore decoded. Further, Yang’s “adaptive
`
`peaking unit 140” that determines the luminance signals teaches the claimed
`
`according to the determined video enhancement strategy. Finally, Yang’s “other
`
`circuits within post processing circuits 135” that “enhance the quality of video
`
`signals” teaches the claimed action of re-mapping regions.
`
`Yang does not teach performing the video enhancement algorithm using
`
`“region based analysis.” However, a PHOSITA would have been motivated to
`
`replace the adaptive peaking algorithm of Yang with the region-based contrast
`
`enhancement re-mapping strategy of Paik as described above in Claim 1(C). For
`
`example, Paik teaches a method of improving contrast by forming a histogram of
`
`intensity values within separate regions of an image (i.e., using a region-based
`
`analysis) and re-mapping those intensity values to improve contrast and thereby
`
`enhance the video (i.e., a re-mapping strategy for video enhancement). Paik
`
(Ex. 1005) at 1:54-2:62; see Freedman Decl. (Ex. 1003) at ¶¶ 53, 55. The contrast
`
`
`
`21
`
`
`
`
`
`
`
`
`
`IPR2019-01085
`U.S. Patent 8,135,073
`
`enhancement method of Paik is equivalent to the example contrast enhancing
`
`algorithms cited as background art in the ’073 Patent because it teaches the same re-
`
`mapping of intensity values within regions of an image within a video stream. ’073
`
Patent at 2:34-43; Paik (Ex. 1005) at Abstract; 1:54-67; 2:57-67; see also Freedman
`
`Decl. (Ex. 1003) at ¶¶ 53, 55.
`
`A PHOSITA would have been motivated to substitute the sharpness
`
`enhancing algorithm of Yang with the contrast enhancing algorithm of Paik because
`
`sharpness and contrast are closely related enhancements that have long been known
`
`to be adjusted in tandem, or one for the other depending on the video source, to
`
`optimize video quality. See Freedman Decl. (Ex. 1003) at ¶ 55 (citing Kimoto as
`
`within the knowledge of a PHOSITA and explaining that sharpness and contrast are
`
`closely related and that one is often more effective depending on the video source).
`
`Doing so would have been the use of a known technique (i.e., contrast enhancement
`
`using region based analysis) to improve a similar device (i.e., the video enhancing
`
`system of Yang) in the same way because Yang’s sharpness algorithm and Paik’s
`
`contrast enhancement algorithm both adjust luminosity of the pixels within an image
`
`to improve video quality. See Freedman Decl. (Ex. 1003) at ¶ 55. A PHOSITA
`
`would have had a reasonable expectation of success with the addition of Paik to
`
Yang because these video enhancement algorithms, as the ’073 Patent admits