Exhibit A

U.S. Patent No. 6,473,532

Method and Apparatus for Visual Lossless Image Syntactic Encoding

Case 1:21-cv-00227-UNA Document 1-1 Filed 02/18/21 Page 2 of 25 PageID #: 13

US006473532B1

(12) United States Patent
Sheraizin et al.

(10) Patent No.: US 6,473,532 B1
(45) Date of Patent: Oct. 29, 2002

(54) METHOD AND APPARATUS FOR VISUAL LOSSLESS IMAGE SYNTACTIC ENCODING

(75) Inventors: Semion Sheraizin; Vitaly Sheraizin, both of Mazkeret Batya (IL)

(73) Assignee: VLS COM Ltd., Rechovot (IL)

( * ) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 09/524,618

(22) Filed: Mar. 14, 2000

(30) Foreign Application Priority Data
Jan. 23, 2000 (IL) ................................................ 134182

(51) Int. Cl.7 ............................. G06K 9/36; G06K 9/46
(52) U.S. Cl. ....................... 382/244; 382/260; 382/263; 382/264; 382/270; 382/274
(58) Field of Search ................................. 382/232, 244, 254, 260, 263, 264, 270, 274, 162, 240; 348/608, 620; 375/240, 240.29

(56) References Cited

U.S. PATENT DOCUMENTS

5,341,442 A    8/1994  Barrett ................... 382/232
5,491,519 A    2/1996  Kim
5,586,200 A   12/1996  Devaney et al.
5,774,593 A    6/1998  Zick et al.
5,787,203 A *  7/1998  Lee et al.
5,796,864 A    8/1998  Callahan
5,845,012 A   12/1998  Jung
5,847,766 A   12/1998  Peak
5,870,501 A    2/1999  Kim
5,901,178 A *  5/1999  Lee et al. ................ 375/240

OTHER PUBLICATIONS

Raj Talluri, et al., "A Robust, Scalable, Object-Based Video Compression Technique for Very Low Bit-Rate Coding", IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, No. 1, Feb. 1997.
Awad Kh. Al-Asmari, "An Adaptive Hybrid Coding Scheme for HDTV and Digital Sequences," IEEE Transactions on Consumer Electronics, vol. 42, No. 3, pp. 926-936, Aug. 1995.
Kwok-tung Lo & Jian Feng, "Predictive Mean Search Algorithms for Fast VQ Encoding of Images," IEEE Transactions on Consumer Electronics, vol. 41, No. 2, pp. 327-331, May 1995.
James Goel et al., "Pre-processing for MPEG Compression Using Adaptive Spatial Filtering", IEEE Transactions on Consumer Electronics, vol. 41, No. 3, pp. 687-698, Aug. 1995.
Jian Feng, et al., "Motion Adaptive Classified Vector Quantization for ATM Video Coding", IEEE Transactions on Consumer Electronics, vol. 41, No. 2, pp. 322-326, May 1995.
Austin Y. Lan, et al., "Scene-Context Dependent Reference-Frame Placement for MPEG Video Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, No. 3, pp. 478-489, Apr. 1999.

(List continued on next page.)

Primary Examiner---Phuoc Tran
(74) Attorney, Agent, or Firm---Eitan, Pearl, Latzer & Cohen-Zedek

(57) ABSTRACT

A visual lossless encoder for processing a video frame prior to compression by a video encoder includes a threshold unit, a filter unit, an association unit and an altering unit. The threshold unit identifies a plurality of visual perception threshold levels to be associated with the pixels of the video frame, wherein the threshold levels define contrast levels above which a human eye can distinguish a pixel from among its neighboring pixels of the frame. The filter unit divides the video frame into portions having different detail dimensions. The association unit utilizes the threshold levels and the detail dimensions to associate the pixels of the video frame into subclasses. Each subclass includes pixels related to the same detail and which generally cannot be distinguished from each other. The altering unit alters the intensity of each pixel of the video frame according to its subclass.

17 Claims, 15 Drawing Sheets

`40
`
`INPUT
`FRAME
`MEMORY
`
`CURRENT
`FRAME
`
`PREVIOUS
`FRAME
`
`PROCESSED
`CURRENT
`FRAME
`
`
`
US 6,473,532 B1
Page 2

OTHER PUBLICATIONS

Kuo-Chin Fan, Kou-Sou Kan, "An Active Scene Analysis-Based Approach for Pseudoconstant Bit-Rate Video Coding", IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, No. 2, pp. 159-170, Apr. 1998.
Takashi Ida and Yoko Sambansugi, "Image Segmentation and Contour Detection Using Fractal Coding", IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, No. 8, pp. 968-975, Dec. 1998.
Liang Shen & Rangaraj M. Rangayyan, "A Segmentation-Based Lossless Image Coding Method for High-Resolution Medical Image Compression", IEEE Transactions on Medical Imaging, vol. 16, No. 3, pp. 301-316, Jun. 1997.
Adrian Munteanu et al., "Wavelet-Based Lossless Compression of Coronary Angiographic Images", IEEE Transactions on Medical Imaging, vol. 18, No. 3, pp. 272-281, Mar. 1999.
Akira Okumura, et al., "Signal Analysis and Compression Performance Evaluation of Pathological Microscopic Images," IEEE Transactions on Medical Imaging, vol. 16, No. 6, pp. 701-710, Dec. 1997.

* cited by examiner

[Drawing Sheet 1 of 15: FIG. 1 — an example of a video frame.]

[Drawing Sheet 2 of 15: FIG. 2 — block diagram of a video compression system having a visual lossless syntactic encoder.]

[Drawing Sheet 3 of 15: FIG. 3 — block diagram of the visual lossless syntactic encoder of FIG. 2, showing input frame memory 40, frame analyzer 42 (spatial-temporal analyzer 50, parameter estimator 52, visual perception threshold determiner 54, subclass determiner 56), intra-frame processor 44 with filter selector 60 and controllable filter bank 62, output frame memory 46 and inter-frame processor 48.]

[Drawing Sheet 4 of 15: FIG. 4 — graph of the transfer functions of an exemplary set of high pass filters (HPF-R1, HPF-R2, HPF-R3, HPF-C1, HPF-C2) used in the syntactic encoder.]

[Drawing Sheet 5 of 15: FIG. 5A — one embodiment of the controllable filter bank, built from low pass filters (LPF-R1 through LPF-R3, LPF-C1, LPF-C2), switches, temporal averagers and non-linear filters (NLF-R, NLF-C).]

[Drawing Sheet 6 of 15: FIG. 5B — alternative embodiment of the controllable filter bank.]

[Drawing Sheet 7 of 15: FIG. 6 — graph of the transfer functions of the low pass filters used in the controllable filter bank.]

[Drawing Sheet 8 of 15: FIG. 7 — graph of the transfer function of a non-linear filter used in the controllable filter bank.]

[Drawing Sheet 9 of 15: FIG. 8A — one embodiment of the inter-frame processor.]

[Drawing Sheet 10 of 15: FIG. 8B — alternative embodiment of the inter-frame processor (output frame memory 46, low and high pass filters, comparators with thresholds THDj and THD-LF).]

[Drawing Sheet 11 of 15: FIG. 8C — further alternative embodiment of the inter-frame processor.]

[Drawing Sheet 12 of 15: FIG. 9 — block diagram of the spatial-temporal analyzer 50, with banks of high pass filters (HPF-R1 through HPF-R3, HPF-C1, HPF-C2) and temporal averagers (TA1 through TA5).]

[Drawing Sheet 13 of 15: FIG. 10A — detail illustration of the analyzer of FIG. 9.]

[Drawing Sheet 14 of 15: FIG. 10B — detail illustration of the analyzer of FIG. 9.]

[Drawing Sheet 15 of 15: FIG. 11 — detail illustration of the frame analyzer, including the visual perception threshold determiner 54, logical decoder 56, amplitude selector, comparators and high pass filters.]

US 6,473,532 B1

METHOD AND APPARATUS FOR VISUAL LOSSLESS IMAGE SYNTACTIC ENCODING

FIELD OF THE INVENTION

The present invention relates generally to processing of video images and, in particular, to syntactic encoding of images for later compression by standard compression techniques.

BACKGROUND OF THE INVENTION

There are many types of video signals, such as digital broadcast television (TV), video conferencing, interactive TV, etc. All of these signals, in their digital form, are divided into frames, each of which consists of many pixels (image elements), each of which requires 8-24 bits to describe. The result is megabits of data per frame.

Before storing and/or transmitting these signals, they typically are compressed, using one of many standard video compression techniques, such as JPEG, MPEG, H-compression, etc. These compression standards use video signal transforms and intra- and inter-frame coding which exploit spatial and temporal correlations among pixels of a frame and across frames.

However, these compression techniques create a number of well-known, undesirable and unacceptable artifacts, such as blockiness, low resolution and wiggles, among others. These are particularly problematic for broadcast TV (satellite TV, cable TV, etc.) or for systems with very low bit rates (video conferencing, videophone).

Much research has been performed to try to improve the standard compression techniques. The following patents and articles discuss various prior art methods to do so:

U.S. Pat. Nos. 5,870,501, 5,847,766, 5,845,012, 5,796,864, 5,774,593, 5,586,200, 5,491,519, 5,341,442;

Raj Talluri et al., "A Robust, Scalable, Object-Based Video Compression Technique for Very Low Bit-Rate Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 7, No. 1, February 1997;

Awad Kh. Al-Asmari, "An Adaptive Hybrid Coding Scheme for HDTV and Digital Sequences," IEEE Transactions on Consumer Electronics, vol. 42, No. 3, pp. 926-936, August 1995;

Kwok-tung Lo and Jian Feng, "Predictive Mean Search Algorithms for Fast VQ Encoding of Images," IEEE Transactions on Consumer Electronics, vol. 41, No. 2, pp. 327-331, May 1995;

James Goel et al., "Pre-processing for MPEG Compression Using Adaptive Spatial Filtering," IEEE Transactions on Consumer Electronics, vol. 41, No. 3, pp. 687-698, August 1995;

Jian Feng et al., "Motion Adaptive Classified Vector Quantization for ATM Video Coding," IEEE Transactions on Consumer Electronics, vol. 41, No. 2, pp. 322-326, May 1995;

Austin Y. Lan et al., "Scene-Context Dependent Reference-Frame Placement for MPEG Video Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, No. 3, pp. 478-489, April 1999;

Kuo-Chin Fan and Kou-Sou Kan, "An Active Scene Analysis-Based Approach for Pseudoconstant Bit-Rate Video Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, No. 2, pp. 159-170, April 1998;

Takashi Ida and Yoko Sambansugi, "Image Segmentation and Contour Detection Using Fractal Coding," IEEE Transactions on Circuits and Systems for Video Technology, vol. 8, No. 8, pp. 968-975, December 1998;

Liang Shen and Rangaraj M. Rangayyan, "A Segmentation-Based Lossless Image Coding Method for High-Resolution Medical Image Compression," IEEE Transactions on Medical Imaging, vol. 16, No. 3, pp. 301-316, June 1997;

Adrian Munteanu et al., "Wavelet-Based Lossless Compression of Coronary Angiographic Images," IEEE Transactions on Medical Imaging, vol. 18, No. 3, pp. 272-281, March 1999; and

Akira Okumura et al., "Signal Analysis and Compression Performance Evaluation of Pathological Microscopic Images," IEEE Transactions on Medical Imaging, vol. 16, No. 6, pp. 701-710, December 1997.

SUMMARY OF THE INVENTION

An object of the present invention is to provide a method and apparatus for video compression which is generally lossless vis-a-vis what the human eye perceives.

There is therefore provided, in accordance with a preferred embodiment of the present invention, a visual lossless encoder for processing a video frame prior to compression by a video encoder. The encoder includes a threshold determination unit, a filter unit, an association unit and an altering unit. The threshold determination unit identifies a plurality of visual perception threshold levels to be associated with the pixels of the video frame, wherein the threshold levels define contrast levels above which a human eye can distinguish a pixel from among its neighboring pixels of the frame. The filter unit divides the video frame into portions having different detail dimensions. The association unit utilizes the threshold levels and the detail dimensions to associate the pixels of the video frame into subclasses. Each subclass includes pixels related to the same detail and which generally cannot be distinguished from each other. The altering unit alters the intensity of each pixel of the video frame according to its subclass.

Moreover, in accordance with a preferred embodiment of the present invention, the altering unit includes an inter-frame processor and an intra-frame processor.

Furthermore, in accordance with a preferred embodiment of the present invention, the intra-frame processor includes a controllable filter bank having a plurality of different filters and a filter selector which selects one of the filters for each pixel according to its subclass.
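
The intra-frame scheme just described, a bank of different filters plus a selector that picks one filter per pixel according to its subclass, can be sketched in Python as follows. This is a minimal illustration only: the subclass labels, the number of filters and the box-filter kernels are invented stand-ins, not the patent's actual bank (which is shown in FIGS. 5A and 5B).

```python
# Minimal sketch of a controllable filter bank with a per-pixel filter
# selector, in the spirit of the intra-frame processor described above.
# Subclass labels and kernel widths are illustrative assumptions only.

def box_filter(row, width):
    """Average `width` neighbors around each pixel (a crude low pass)."""
    half = width // 2
    out = []
    for i in range(len(row)):
        lo, hi = max(0, i - half), min(len(row), i + half + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

# Assumed bank: stronger smoothing for subclasses the eye resolves less.
FILTER_BANK = {
    0: lambda row: list(row),           # distinguishable detail: pass through
    1: lambda row: box_filter(row, 3),  # medium detail: light smoothing
    2: lambda row: box_filter(row, 5),  # indistinct detail: heavy smoothing
}

def filter_row(row, subclasses):
    # Run every filter once, then let the selector pick, per pixel,
    # the output of the filter matching that pixel's subclass.
    filtered = {k: f(row) for k, f in FILTER_BANK.items()}
    return [filtered[subclasses[i]][i] for i in range(len(row))]

row = [10.0, 10.0, 40.0, 10.0, 10.0]
out = filter_row(row, [0, 0, 2, 0, 0])  # smooth only the middle pixel
```

The per-pixel selection is the essential point: neighboring pixels can receive entirely different filtering depending on how visually distinguishable their detail is.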
Further, in accordance with a preferred embodiment of the present invention, the inter-frame processor includes a low pass filter and a high pass filter operative on a difference frame between a current frame and a previous frame, large and small detail threshold elements for thresholding the filtered difference frame with a large detail threshold level and a small detail threshold level, respectively, and a summer which sums the outputs of the two filters as amended by the threshold elements.
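
As a rough illustration of that inter-frame branch (difference frame, low and high pass filtering, large and small detail thresholds, summation), here is a hedged Python sketch. The 3-tap kernel, the complementary high pass and the exact threshold values are assumptions for illustration; only the 2-5% ballpark for the large detail threshold comes from the text.

```python
# Illustrative sketch of the inter-frame processor described above:
# a difference frame is low- and high-pass filtered, each branch is
# thresholded (large-detail and small-detail thresholds), and the
# surviving portions are summed. Kernel and thresholds are assumed.
import numpy as np

def interframe_process(current, previous, thd_large=0.04, thd_small=0.02):
    diff = current.astype(float) - previous.astype(float)

    # Crude 3-tap low pass along rows (assumed kernel, not the patent's).
    lpf = np.copy(diff)
    lpf[:, 1:-1] = (diff[:, :-2] + 2 * diff[:, 1:-1] + diff[:, 2:]) / 4.0

    hpf = diff - lpf  # complementary high pass

    # Keep only components exceeding their detail thresholds
    # (expressed as fractions of an 8-bit intensity range).
    lpf_kept = np.where(np.abs(lpf) > thd_large * 255, lpf, 0.0)
    hpf_kept = np.where(np.abs(hpf) > thd_small * 255, hpf, 0.0)

    return lpf_kept + hpf_kept  # summed output of the two branches

prev = np.zeros((4, 8))
curr = np.zeros((4, 8))
curr[2, 4] = 200.0  # one strongly changed pixel between the frames
out = interframe_process(curr, prev)
```

A change well above both thresholds survives both branches and is reconstructed in the output, while unchanged regions stay exactly zero, which is why only the differing elements need to update the output frame memory.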
Still further, in accordance with a preferred embodiment of the present invention, the threshold unit includes a unit for generating a plurality of parameters describing at least one of the following: the volume of information in the frame, the per pixel color and the cross-frame change of intensity, and a unit for generating the visual perception threshold from at least one of the parameters.

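A unit that turns such parameters into a per-pixel visual perception threshold might look like the sketch below. The combining rule follows the partially garbled equation given later in the text (THD_min plus a 200-over-sum term); treat both that reconstruction and the sample parameter values as assumptions rather than the patent's definitive formula.

```python
# Worked illustration of a per-pixel visual perception threshold,
# following the (reconstructed) equation appearing later in the text.
# All numeric values below are made-up placeholders; the patent defines
# how each parameter is measured, not these numbers.
def visual_threshold(thd_min, ni_f, ni_gop, ny_i, p_i, r_hue, snr):
    # THD_i = THD_min + 200 / (NI_F + NI_GOP + NY_i + p_i
    #                          + (1 - R_i(h_i)) + SNR)
    return thd_min + 200.0 / (ni_f + ni_gop + ny_i + p_i
                              + (1.0 - r_hue) + snr)

thd = visual_threshold(thd_min=0.02, ni_f=10.0, ni_gop=20.0,
                       ny_i=0.5, p_i=0.3, r_hue=0.8, snr=30.0)
# denominator = 10 + 20 + 0.5 + 0.3 + 0.2 + 30 = 61.0
```

Note the qualitative behavior: the busier the frame (larger inter-frame change volumes, higher SNR), the smaller the added term, so the threshold approaches THD_min and more details count as distinguishable.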
There is also provided, in accordance with a preferred embodiment of the present invention, a method of visual lossless encoding of frames of a video signal. The method includes the steps of spatially and temporally separating and analyzing details of the frames, estimating parameters of the details, defining a visual perception threshold for each of the details in accordance with the estimated detail parameters, classifying the frame picture details into subclasses in accordance with the visual perception thresholds and the detail parameters and transforming each frame detail in accordance with its associated subclass.

Additionally, in accordance with a preferred embodiment of the present invention, the step of separating and analyzing also includes the step of spatial high pass filtering of small dimension details and temporal filtering for detail motion analysis.

Moreover, in accordance with a preferred embodiment of the present invention, the step of estimating comprises at least one of the following steps:

determining ΔNI_i, a per-pixel signal intensity change between a current frame and a previous frame, normalized by a maximum intensity;

determining NI_XY, a normalized volume of intra-frame change, by high frequency filtering of the frame, summing the intensities of the filtered frame and normalizing the sum by the maximum possible amount of information within a frame;

generating NI_F, a volume of inter-frame changes between a current frame and its previous frame, normalized by a maximum possible amount of information volume within a frame;

generating NI_GOP, a normalized volume of inter-frame changes for a group of pictures, from the output of the previous step of generating;

evaluating a signal-to-noise ratio SNR by high pass filtering a difference frame between the current frame and the previous frame, selecting those intensities of the difference frame lower than a threshold defined as three times a noise level under which noise intensities are not perceptible to the human eye, summing the intensities of the pixels in the filtered frame and normalizing the sum by the maximum intensity and the total number of pixels in the frame;

generating NY_i, a normalized intensity value per-pixel;

generating a per-pixel color saturation level p_i;

generating a per-pixel hue value h_i; and

determining a per-pixel response R_i(h_i) to the hue value.

Further, in accordance with a preferred embodiment of the present invention, the step of defining includes the step of producing the visual perception thresholds, per-pixel, from a minimum threshold value and at least one of the parameters.

Still further, in accordance with a preferred embodiment of the present invention, the step of defining includes the step of producing the visual perception thresholds, per-pixel, according to the following equation:

    THD_i = THD_min + 200 / (NI_F + NI_GOP + NY_i + p_i + (1 - R_i(h_i)) + SNR)

wherein THD_min is a minimum threshold level.

Moreover, in accordance with a preferred embodiment of the present invention, the step of classifying includes the steps of comparing multiple spatial high frequency levels of a pixel against its associated visual perception threshold and processing the comparison results to associate the pixel with one of the subclasses.

Further, in accordance with a preferred embodiment of the present invention, the step of transforming includes the step of filtering each subclass with an associated two-dimensional low pass filter.

Still further, in accordance with a preferred embodiment of the present invention, the step of transforming includes the steps of generating a difference frame between the current frame and a previous transformed frame, low and high pass filtering of the difference frame, comparing the filtered frames with a large detail threshold and a small detail threshold and summing those portions of the filtered frames which are greater than the thresholds.

Additionally, in accordance with a preferred embodiment of the present invention, the large detail threshold is 2 to 5 percent.

Moreover, in accordance with a preferred embodiment of the present invention, the method includes the step of rounding the output of the step of transforming.

Finally, the intensity can be a luminance value or a chrominance value.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the appended drawings in which:

FIG. 1 is an example of a video frame;

FIG. 2 is a block diagram illustration of a video compression system having a visual lossless syntactic encoder, constructed and operative in accordance with a preferred embodiment of the present invention;

FIG. 3 is a block diagram illustration of the details of the visual lossless syntactic encoder of FIG. 2;

FIG. 4 is a graphical illustration of the transfer functions for a number of high pass filters useful in the syntactic encoder of FIG. 3;

FIGS. 5A and 5B are block diagram illustrations of alternative embodiments of a controllable filter bank forming part of the syntactic encoder of FIG. 3;

FIG. 6 is a graphical illustration of the transfer functions for a number of low pass filters useful in the controllable filter bank of FIGS. 5A and 5B;

FIG. 7 is a graphical illustration of the transfer function for a non-linear filter useful in the controllable filter bank of FIGS. 5A and 5B;

FIGS. 8A, 8B and 8C are block diagram illustrations of alternative embodiments of an inter-frame processor forming a controlled filter portion of the syntactic encoder of FIG. 3;

FIG. 9 is a block diagram illustration of a spatial-temporal analyzer forming part of the syntactic encoder of FIG. 3;

FIGS. 10A and 10B are detail illustrations of the analyzer of FIG. 9; and

FIG. 11 is a detail illustration of a frame analyzer forming part of the syntactic encoder of FIG. 3.

DETAILED DESCRIPTION OF THE PRESENT INVENTION

Applicants have realized that there are different levels of image detail in an image and that the human eye perceives these details in different ways. In particular, Applicants have realized the following:

1. Picture details whose detection mainly depends on the level of noise in the image occupy approximately 50-80% of an image.

2. A visual perception detection threshold for image details does not depend on the shape of the details in the image.

3. A visual perception threshold THD depends on a number of picture parameters, including the general brightness of the image. It does not depend on the noise spectrum.

The present invention is a method for describing, and then encoding, images based on which details in the image can be distinguished by the human eye and which ones can only be detected by it.

Reference is now made to FIG. 1, which is a grey-scale image of a plurality of shapes of a bird in flight, ranging from a photograph of one (labeled 10) to a very stylized version of one (labeled 12). The background of the image is very dark at the top of the image and very light at t

frame memory 40, a frame analyzer 42, an intra-frame processor 44, an output frame memory 46 and an inter-frame processor 48. Analyzer 42 analyzes each frame to separate it into subclasses, where subclasses define areas whose pixels cannot be distinguished from each other. Intra-frame processor 44 spatially filters each pixel of the frame according to its subclass and, optionally, also provides each pixel of the frame with the appropriate number of bits. Inter-frame processor 48 provides temporal filtering (i.e. inter-frame filtering) and updates output frame memory 46 with the elements of the current frame which are different than those of the previous frame.

It is noted that frames are composed of pixels, each having luminance Y and two chrominance Cr and Cb components, each of which is typically defined by eight bits. VLS encoder 20 generally processes the three components separately. However, the bandwidth of the chrominance signals is half as wide as that of the luminance signal. Thus, the filters (in the x direction of the frame) for chrominance have a narrower bandwidth. The following discussion shows the filters for the luminance signal Y.

Frame analyzer 42 comprises a spatial-temporal analyzer 50, a parameter estimator 52, a visual perception threshold determiner 54 and a subclass determiner 56. Details of these elements are provided in FIGS. 9-11, discussed hereinbelow.

As discussed hereinabove, details which the human eye distinguishes are ones of high contrast and ones whose details have small dimensions. Areas of high contrast are areas with a lot of high frequency content. Thus, spatial-temporal analyzer 50 generates a plurality of filtered frames from the current frame, each filtered through a different high pass filter (HPF), where each high pass filter retains a different range of frequencies therein.

FIG. 4, to which reference is now briefly made, is an amplitude vs. frequency graph illustrating the transfer functions of an exemplary set of high pass filters for frames in a non-interlaced scan format. Four graphs are shown. It can be seen that the curve labeled HPF-R3 has a cutoff frequency of 1 MHz and thus retains portions of the frame with information above 1 MHz. Similarly, curve HPF-R2 has a cutoff frequency of 2 MHz, HPF-C2 has a cutoff frequency of 3 MHz and HPF-R1 and HPF-C1 have a cutoff frequency of 4 MHz. As will be discussed hereinbelow, the terminology "Rx" refers to operations on a row of pixels while the terminology "Cx" refers to operations on a column of pixels.

In particular, the filters of FIG. 4 implement finite impulse response (FIR) filters on either a row of pixels (the x direction of the frame) or a column of pixels (the y direction of the frame), where the number of pixels used in the filter defines the power of the cosine. For example, a filter implementing cos^10(x) takes 10 pixels around the pixel of interest, five to one side and five to the other side of the pixel of interest.