US 20030174776 A1

(19) United States
(12) Patent Application Publication        (10) Pub. No.: US 2003/0174776 A1
     Shimizu et al.                        (43) Pub. Date: Sep. 18, 2003

(54) MOTION VECTOR PREDICTIVE ENCODING METHOD, MOTION VECTOR DECODING
     METHOD, PREDICTIVE ENCODING APPARATUS AND DECODING APPARATUS, AND
     STORAGE MEDIA STORING MOTION VECTOR PREDICTIVE ENCODING AND
     DECODING PROGRAMS

(75) Inventors: Atsushi Shimizu, Tokyo (JP); Hirohisa Jozawa, Tokyo (JP);
     Kazuto Kamikura, Tokyo (JP); Hiroshi Watanabe, Tokyo (JP); Atsushi
     Sagata, Tokyo (JP); Seishi Takamura, Tokyo (JP)

     Correspondence Address:
     PENNIE AND EDMONDS
     1155 AVENUE OF THE AMERICAS
     NEW YORK, NY 10036-2711

(73) Assignee: Nippon Telegraph and Telephone Corporation, Tokyo (JP)

(21) Appl. No.: 10/354,663

(22) Filed: Jan. 30, 2003

Related U.S. Application Data

(63) Continuation of application No. 09/254,116, filed on Feb. 25, 1999,
     filed as 371 of international application No. PCT/JP98/02839, filed
     on Jun. 25, 1998.

(30) Foreign Application Priority Data

     Jun. 25, 1997  (JP) ........................................... 09-168947
     Jul. 15, 1997  (JP) ........................................... 09-189985

Publication Classification

(51) Int. Cl.7 ....................................................... H04N 7/12
(52) U.S. Cl. ................................. 375/240.16; 375/240.13
(57) ABSTRACT

A motion vector predictive encoding method, a motion vector decoding
method, a predictive encoding apparatus, a decoding apparatus, and
storage media storing motion vector predictive encoding and decoding
programs are provided, thereby reducing the amount of generated code
with respect to the motion vector and improving the efficiency of the
motion-vector prediction. If the motion-compensating mode of the target
small block to be encoded is the global motion compensation, the
encoding mode of an already-encoded small block is the interframe
coding mode, and the motion-compensating mode of the already-encoded
small block is the global motion compensation, then the motion vector
of the translational motion model is determined for each pixel of the
already-encoded small block, based on the global motion vector (steps
S1-S5). Next, the representative motion vector is calculated as the
predicted vector, based on the motion vector of each pixel of the
already-encoded small block (step S6). Finally, the prediction error is
calculated for each component of the motion vector and each prediction
error is encoded (steps S7 and S8).
[Front-page figure: decoding flowchart excerpt — branch on mode (GMC /
intraframe coding with vp=(0,0) / LMC); calculate motion vector for
each pixel based on global motion vector; calculate representative
motion vector; add prediction error (steps S12-S18); END.]

APPLE-1024
[Sheet 1 of 11 — FIG. 1: Encoding flowchart. START; branch on the
encoding mode (intraframe coding → vp=(0,0); GMC; LMC); calculate the
motion vector for each pixel based on the global motion vector (S5);
calculate the representative motion vector (S6); calculate the
prediction error (S7); encode the prediction error (S8); END.]
[Sheet 2 of 11 — FIG. 2: Decoding flowchart. START; decode the
prediction error (S12); branch on the mode (intraframe coding →
vp=(0,0); GMC; LMC); calculate the motion vector for each pixel based
on the global motion vector (S16); calculate the representative motion
vector (S17); add the prediction error (S18); END.]
[Sheet 3 of 11 — FIG. 3: Block diagram of the motion-vector predictive
encoding apparatus. The global motion vector gmv is encoded by a
motion-vector encoding section (6) and fed to a
representative-motion-vector calculating section (2); a motion-vector
memory holds the previous vector mv_t-1; the local motion vector mv_t
is predicted and the prediction error dmv_t is encoded by a
motion-vector encoding section (5), producing the local motion vector
information.]
[Sheet 4 of 11 — FIG. 4: Block diagram of the motion-vector decoding
apparatus. The global motion vector information is decoded into gmv by
a motion-vector decoding section (10) and fed to a
representative-motion-vector calculating section (12); a motion-vector
memory (14) holds mv_t-1; the local motion vector information is
decoded into the prediction error dmv_t by a motion-vector decoding
section (11), and the local motion vector mv_t is reconstructed.]
[Sheet 5 of 11 — FIG. 5: Encoding flowchart with clipping. As in
FIG. 1 (branch on GMC / intraframe coding → vp=(0,0) / LMC; per-pixel
motion vectors from the global motion vector (S5); representative
motion vector (S6)), plus clipping of the representative motion vector
(S21) before the prediction error is calculated (S7) and encoded (S8);
END.]
[Sheet 6 of 11 — FIG. 6: Decoding flowchart with clipping. As in
FIG. 2 (decode prediction error (S12); branch on mode; per-pixel motion
vectors from the global motion vector (S16); representative motion
vector (S17)), plus clipping of the representative motion vector (S23)
before the prediction error is added (S18); END.]
[Sheet 7 of 11 — FIG. 7: Block diagram of the encoding apparatus of
FIG. 3 extended with a representative-motion-vector clipping section
(20) after the representative-motion-vector calculating section (2);
the global motion vector gmv is encoded by section 6, and the local
motion vector mv_t and prediction error dmv_t are handled by the
motion-vector encoding section (5) with mv_t-1 from the motion-vector
memory.]
[Sheet 8 of 11 — FIG. 8: Block diagram of the decoding apparatus of
FIG. 4 extended with a representative-motion-vector clipping section
(21) after the representative-motion-vector calculating section (12);
the global motion vector information is decoded into gmv by section 10,
and the local motion vector information is decoded by the motion-vector
decoding section (11) using mv_t-1 from the motion-vector memory (14).]
[Sheet 9 of 11 — FIG. 9: motion vectors of (a) the translational motion
model and (b) the translational motion and extending/contracting motion
model. FIG. 10: reference blocks in the reference frame and target
blocks (Boa, Bob) in the target frame. FIG. 13: the current block's
motion vector MV and the reference vectors MV1 (left block), MV2 (block
immediately above), and MV3 (block diagonally above to the right) used
for median prediction.]
[Sheet 10 of 11 — FIG. 11: Block diagram of the MPEG-4 encoder with
global motion compensation. Frame memory 32 (reference picture 33),
global motion detector 34 (global motion parameters 35), global motion
compensator 36 (predicted picture 37), local motion detector 38 (motion
vector 39), local motion compensator 40 (predicted picture 41),
encoding mode selector 42 (predicted picture 43, encoded-mode choice
information 56), subtracter 44 (difference picture 45), DCT section 46
(coefficient 47), quantizer 48 (quantized index 49), inverse quantizer
50 (representative value 51), inverse-DCT section 52 (difference
picture 53), adder 54 (local decoded picture 55), quantized-index
encoder 57, encoded-mode encoder 58, motion-vector encoder 59,
global-motion-parameter encoder 60.]
[Sheet 11 of 11 — FIG. 12: Block diagram of the MPEG-4 decoder.
Quantized-index decoder 61, encoded-mode decoder 62, motion-vector
decoder 63, global-motion-parameter decoder 64, inverse quantizer 65,
inverse-DCT section 66, adder 67, frame memory 68, global motion
compensator 69, local motion compensator 70, and switch 71 selecting
the predicted picture 43.]
MOTION VECTOR PREDICTIVE ENCODING METHOD, MOTION VECTOR DECODING
METHOD, PREDICTIVE ENCODING APPARATUS AND DECODING APPARATUS, AND
STORAGE MEDIA STORING MOTION VECTOR PREDICTIVE ENCODING AND DECODING
PROGRAMS

TECHNICAL FIELD

[0001] The present invention relates to motion vector predictive
encoding and decoding methods, predictive encoding and decoding
apparatuses, and storage media storing motion vector predictive
encoding and decoding programs. These methods, apparatuses, and storage
media are used for motion-compensating interframe prediction in motion
picture encoding.
BACKGROUND ART

[0002] The interframe predictive coding method for coding motion
pictures (i.e., video data) is known, in which an already-encoded frame
is used as a prediction signal so as to reduce temporal redundancy. In
order to improve the efficiency of the time-based prediction, a
motion-compensating interframe prediction method is used in which a
motion-compensated picture signal is used as a prediction signal. The
number and the kinds of components of the motion vector used for the
motion compensation are determined depending on the assumed motion
model used as a basis. For example, in a motion model in which only
translational movement is considered, the motion vector consists of
components corresponding to horizontal and vertical motions. In another
motion model in which extension and contraction are also considered in
addition to the translational movement, the motion vector consists of
components corresponding to horizontal and vertical motions, and a
component corresponding to the extending or contracting motion.

[0003] Generally, the motion compensation is executed for each small
area obtained by dividing a picture into a plurality of areas such as
small blocks, and each divided area has an individual motion vector. It
is known that the motion vectors belonging to neighboring areas,
including adjacent small areas, have a high correlation. Therefore, in
practice, the motion vector of an area to be encoded is predicted based
on the motion vector of an area which neighbors the area to be encoded,
and the prediction error generated by the prediction is
variable-length-encoded so as to reduce the redundancy of the motion
vector.
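The neighbour-based prediction described above can be sketched as
follows; this is an illustrative sketch, and the helper names are
hypothetical, not taken from the text.

```python
# Sketch of the idea in [0003]: each block's motion vector is predicted
# from an already-encoded neighbouring block, and only the prediction
# error (residual) is variable-length-coded.

def predict_and_encode(mv, neighbour_mv):
    """Return the prediction error to be variable-length-coded."""
    return (mv[0] - neighbour_mv[0], mv[1] - neighbour_mv[1])

def decode(residual, neighbour_mv):
    """Invert the prediction: reconstruct the motion vector."""
    return (neighbour_mv[0] + residual[0], neighbour_mv[1] + residual[1])

mv = (5, -3)          # vector of the block to be encoded
neighbour = (4, -2)   # vector of an already-encoded neighbour
res = predict_and_encode(mv, neighbour)   # small residual: (1, -1)
assert decode(res, neighbour) == mv       # decoder round-trip
```

Because neighbouring vectors are highly correlated, the residual tends
to be small and therefore receives short variable-length codewords.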
[0004] In the moving-picture coding method ISO/IEC 11172-2 (MPEG-1),
the picture to be encoded is divided into small blocks so as to
motion-compensate each small block, and the motion vector of a small
block to be encoded (hereinbelow called the "target small block") is
predicted based on the motion vector of a small block which has already
been encoded.

[0005] In the above MPEG-1, only translational motions can be
compensated. It may be impossible to compensate more complicated
motions with a simple model, such as that of MPEG-1, which has few
components of the motion vector. Accordingly, the efficiency of the
interframe prediction can be improved by using a motion-compensating
method which corresponds to a more complicated model having a greater
number of components of the motion vector. However, when each small
block is motion-compensated in such a method for a complicated motion
model, the amount of code generated when encoding the relevant motion
vector is increased.

[0006] An encoding method for avoiding such an increase of the amount
of generated code is known, in which the motion-vector encoding is
performed using a method, selected from a plurality of
motion-compensating methods, which minimizes the prediction error with
respect to the target block. The following is an example of such an
encoding method in which two motion-compensating methods are provided,
one corresponding to a translational motion model, the other
corresponding to a translational motion and extending/contracting
motion model, and one of the two motion-compensating methods is chosen.
[0007] FIG. 9 shows a translational motion model (see part (a)) and a
translational motion and extending/contracting motion model (see part
(b)). In the translational motion model of part (a), the motion of a
target object is represented using a translational motion component
(x, y). In the translational motion and extending/contracting motion
model of part (b), the motion of a target object is represented using a
component (x, y, z), in which parameter z, indicating the amount of
extension or contraction of the target object, is added to the
translational motion component (x, y). In the example shown in FIG. 9,
parameter z has a value corresponding to the contraction (see part
(b)).

[0008] Accordingly, motion vector

    v1

[0009] of the translational motion model is represented by:

    v1 = (x, y)

[0010] while motion vector

    v2

[0011] of the translational motion and extending/contracting motion
model is represented by:

    v2 = (x, y, z)

[0012] In the above formulas, x, y, and z respectively indicate
horizontal, vertical, and extending/contracting direction components.
Here, the unit for motion compensation is a small block, the active
motion-compensating method may be switched for each small block in
accordance with the present prediction efficiency, and the motion
vector is predicted based on the motion vector of an already-encoded
small block.
[0013] If the motion-compensating method chosen for the target small
block is the same as that adopted for the already-encoded small block,
the prediction error of the motion vector is calculated by the
following equations.

[0014] For the translational motion model:

    d1x,y = v1x,y(i) - v1x,y(i-1)    (1)

[0015] For the translational motion and extending/contracting motion
model:

    d2x,y,z = v2x,y,z(i) - v2x,y,z(i-1)    (2)

[0016] Here, v1x,y(i) and v2x,y,z(i) mean components of the motion
vector of the target small block, while v1x,y(i-1) and v2x,y,z(i-1)
mean components of the motion vector of a small block of the previous
frame.

[0017] As explained above, prediction errors d1x,y and d2x,y,z are
calculated and encoded so as to transmit the encoded data to the
decoding side. Even if the size of each small block is not the same in
the motion-compensating method, the motion vector predictive encoding
is similarly performed if the motion model is the same.
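Equations (1) and (2) amount to a component-wise subtraction; the
following is a minimal sketch, assuming both vectors belong to the same
motion model.

```python
# Sketch of equations (1) and (2): component-wise prediction error
# between the target block's vector at frame i and the already-encoded
# vector at frame i-1, for either motion model.

def prediction_error(v_i, v_prev):
    # Works for both the 2-component (x, y) translational model and the
    # 3-component (x, y, z) translational/extending model, as long as
    # both vectors belong to the same model.
    assert len(v_i) == len(v_prev)
    return tuple(a - b for a, b in zip(v_i, v_prev))

d1 = prediction_error((3, 1), (2, 1))        # translational model
d2 = prediction_error((3, 1, 2), (2, 1, 1))  # with z component
```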
[0018] If the motion-compensating method chosen for the target small
block differs from that adopted for the already-encoded small block, or
if intraframe coding is performed, then the predicted value for each
component is set to 0 and the original values of each component of the
target small block are transmitted to the decoding side.

[0019] By using such an encoding method, the redundancy of the motion
vector with respect to the motion-compensating interframe predictive
encoding can be reduced, and the amount of generated code of the motion
vector can be reduced.
[0020] On the other hand, the motion vector which has been encoded
using the above-described encoding method is decoded in a manner such
that the prediction error is extracted from the encoded data sequence,
and the motion vector of the small block to be decoded (i.e., the
target small block) is decoded by adding the prediction error to the
motion vector which has already been decoded. See the following
equations.

[0021] For the translational motion model:

    v1x,y(i) = v1x,y(i-1) + d1x,y    (3)

[0022] For the translational motion and extending/contracting motion
model:

    v2x,y,z(i) = v2x,y,z(i-1) + d2x,y,z    (4)

[0023] Here, v1x,y(i) and v2x,y,z(i) mean components of the motion
vector of the target small block, while v1x,y(i-1) and v2x,y,z(i-1)
mean components of the already-decoded motion vector.
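The decoder-side reconstruction of equations (3) and (4) is the inverse
addition; a minimal sketch:

```python
# Sketch of equations (3) and (4): the decoder reconstructs the motion
# vector by adding the transmitted prediction error to the
# already-decoded vector of the previous frame.

def reconstruct(v_prev, d):
    return tuple(p + e for p, e in zip(v_prev, d))

v1 = reconstruct((2, 1), (1, 0))        # translational model
v2 = reconstruct((2, 1, 1), (1, 0, 1))  # translational + extending model
```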
[0024] In the model ISO/IEC 14496-2 (MPEG-4), under testing for
international standardization in January 1999, a similar
motion-compensating method is adopted. The MPEG-4 adopts a global
motion-compensating method for predicting the general change or
movement of a picture caused by panning, tilting, and zooming
operations of the camera (refer to "MPEG-4 Video Verification Model
Version 7.0", ISO/IEC JTC1/SC29/WG11 N1682, MPEG Video Group, April
1997). Hereinafter, the structure and the operational flow of the
encoder using the global motion compensation will be explained with
reference to FIG. 11.
[0025] First, a picture to be encoded (i.e., target picture) 31 is
input into global motion detector 34 so as to determine global motion
parameters 35 with respect to the entire picture. In the MPEG-4, the
projective transformation and the affine transformation may be used in
the motion model.

[0026] With a target point (x, y) and a corresponding point (x', y')
relating to the transformation, the projective transformation can be
represented using the following equations (5) and (6).

    x' = (ax + by + tx) / (px + qy + s)    (5)
    y' = (cx + dy + ty) / (px + qy + s)    (6)
[0027] Generally, the case of "s=1" belongs to the projective
transformation. The projective transformation is a general
representation of the two-dimensional transformation, and the affine
transformation is represented by the following equations (7) and (8),
which can be obtained under the conditions "p=q=0" and "s=1".

    x' = ax + by + tx    (7)
    y' = cx + dy + ty    (8)
[0028] In the above equations, "tx" and "ty" respectively represent
the amounts of translational motion in the horizontal and vertical
directions. Parameter "a" represents extension/contraction or inversion
in the horizontal direction, while parameter "d" represents
extension/contraction or inversion in the vertical direction. Parameter
"b" represents shear in the horizontal direction, while parameter "c"
represents shear in the vertical direction. In addition, the conditions
"a=cos θ, b=sin θ, c=-sin θ, and d=cos θ" correspond to rotation by
angle θ. The conditions "a=d=1" and "b=c=0" equal the conventional
translational motion model.
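Equations (5)-(8) can be exercised directly; the function names below
are illustrative, and the affine case is obtained from the projective
one under the stated conditions p=q=0 and s=1.

```python
# Sketch of equations (5)-(8): mapping a target point (x, y) to its
# corresponding point (x', y').

def projective(x, y, a, b, c, d, tx, ty, p, q, s):
    # Equations (5) and (6): shared denominator px + qy + s.
    den = p * x + q * y + s
    return ((a * x + b * y + tx) / den,
            (c * x + d * y + ty) / den)

def affine(x, y, a, b, c, d, tx, ty):
    # Equations (7) and (8): projective with p = q = 0 and s = 1.
    return projective(x, y, a, b, c, d, tx, ty, p=0.0, q=0.0, s=1.0)

# With a = d = 1 and b = c = 0 this reduces to pure translation:
assert affine(10.0, 20.0, 1, 0, 0, 1, 3, -2) == (13.0, 18.0)
```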
[0029] As explained above, the affine transformation used as the motion
model enables the representation of various motions such as
translational movement, extension/contraction, inversion, shear, and
rotation, and any combination of these motions. A projective
transformation having eight or nine parameters can represent more
complicated motions or deformations.

[0030] The global motion parameters 35, determined by the global motion
detector 34, and reference picture 33 stored in the frame memory 32 are
input into global motion compensator 36. The global motion compensator
36 generates a global motion-compensating predicted picture 37 by
making the motion vector of each pixel, determined based on the global
motion parameters 35, act on the reference picture 33.
[0031] The reference picture 33 stored in the frame memory 32 and the
input picture 31 are input into local motion detector 38. The local
motion detector 38 detects, for each macro block (16 pixels × 16
lines), motion vector 39 between input picture 31 and reference picture
33. The local motion compensator 40 generates a local
motion-compensating predicted picture 41 based on the motion vector 39
of each macro block and the reference picture 33. This method equals
the conventional motion-compensating method used in MPEG or the like.

[0032] Next, one of the global motion-compensating predicted picture 37
and the local motion-compensating predicted picture 41, whichever has
the smaller error with respect to the input picture 31, is chosen in
the encoding
mode selector 42. This choice is performed for each macro block. If the
global motion compensation is chosen, the local motion compensation is
not performed in the relevant macro block; thus, motion vector 39 is
not encoded. The predicted picture 43 chosen via the encoding mode
selector 42 is input into subtracter 44, and picture 45 corresponding
to the difference between the input picture 31 and the predicted
picture 43 is converted into DCT (discrete cosine transform)
coefficient 47 by DCT section 46. The DCT coefficient 47 is then
converted into quantized index 49 in quantizer 48. The quantized index
49 is encoded by quantized-index encoder 57, encoded-mode choice
information 56 is encoded by encoded-mode encoder 58, motion vector 39
is encoded by motion-vector encoder 59, and the global motion
parameters 35 are encoded by global-motion-parameter encoder 60. These
encoded data are multiplexed and output as the encoder output.

[0033] In order for the encoder to acquire the same decoded picture as
acquired in the decoder, the quantized index 49 is inverse-converted
into a quantization representative value 51 by inverse quantizer 50,
and is further inverse-converted into difference picture 53 by
inverse-DCT section 52. The difference picture 53 and predicted picture
43 are added to each other by adder 54 so that local decoded picture 55
is generated. This local decoded picture 55 is stored in the frame
memory 32 and is used as a reference picture at the encoding of the
next frame.
[0034] Next, the relevant decoding operations of the MPEG-4 decoder
will be explained with reference to FIG. 12. The multiplexed and
encoded bit stream is divided into its elements, and the elements are
respectively decoded. The quantized-index decoder 61 decodes quantized
index 49, encoded-mode decoder 62 decodes encoded-mode choice
information 56, motion-vector decoder 63 decodes motion vector 39, and
global-motion-parameter decoder 64 decodes global motion parameters 35.

[0035] The reference picture 33 stored in the frame memory 68 and
global motion parameters 35 are input into global motion compensator 69
so that global motion-compensated picture 37 is generated. In addition,
the reference picture 33 and motion vector 39 are input into local
motion compensator 70 so that local motion-compensated predicted
picture 41 is generated. The encoded-mode choice information 56
activates switch 71 so that one of the global motion-compensated
picture 37 and the local motion-compensated picture 41 is output as
predicted picture 43.

[0036] The quantized index 49 is inverse-converted into quantization
representative value 51 by inverse quantizer 65, and is further
inverse-converted into difference picture 53 by inverse-DCT section 66.
The difference picture 53 and predicted picture 43 are added to each
other by adder 67 so that local decoded picture 55 is generated. This
local decoded picture 55 is stored in the frame memory 68 and is used
as a reference picture when decoding the next frame.
[0037] In the above-explained global motion-compensating predictive
method adopted in MPEG-4, one of the predicted pictures of the global
motion compensation and the local motion compensation, whichever has
the smaller error, is chosen for each macro block so that the
prediction efficiency of the entire frame is improved. In addition, the
motion vector is not encoded in a macro block to which the global
motion compensation is applied; thus, the generated code can be reduced
by the amount necessary for conventional encoding of the motion vector.
[0038] On the other hand, in the conventional method in which the
active motion-compensating method is switched between a plurality of
motion-compensating methods corresponding to different motion models,
no prediction relating to a shift between motion vectors belonging to
different motion models is performed. For example, in the encoding
method in which the motion-compensating method corresponding to a
translational motion model and the motion-compensating method
corresponding to a translational motion and extending/contracting
motion model are switched, a shift from the motion vector of the
translational motion and extending/contracting motion model to the
motion vector of the translational motion model cannot be simply
predicted using a difference, because the number of parameters used for
the motion vector differs between the two methods.
[0039] However, redundancy of the motion vector may also occur between
different motion models. Therefore, correlation between the motion
vector of the translational motion model and the motion vector of the
translational motion and extending/contracting motion model will be
examined with reference to the motion vectors shown in FIG. 10. In
FIG. 10, it is assumed that in the motion compensation of target small
blocks Boa and Bob, the target small block Boa is motion-compensated
using the method corresponding to the translational motion model and
referring to small block Bra included in the reference frame, while the
target small block Bob is motion-compensated using the method
corresponding to the translational motion and extending/contracting
motion model and referring to small block Brb included in the reference
frame.

[0040] In this case, motion vector va = (xa, ya) in FIG. 10 indicates
the translational motion model, while motion vector vb = (xb, yb, zb)
in FIG. 10 indicates the translational motion and extending/contracting
motion model. Here, in the motion compensation of the small block Bob,
small block Brb in the reference frame to be referred to is extended.
Therefore, the translational motion components of the motion vectors va
and vb in FIG. 10 have almost the same values, and redundancy exists.
[0041] However, in the conventional method, such redundancy between
motion vectors of different motion models cannot be reduced, because no
motion vector of a motion model which differs from the present motion
model is predicted based on the motion vector of the present model.

[0042] In the above MPEG-4, predictive encoding is adopted so as to
efficiently encode the motion vector. For example, the operations of
motion-vector encoder 59 in FIG. 11 are as follows. As shown in
FIG. 13, three motion vectors, that is, motion vector MV1 of the left
block, motion vector MV2 of the block immediately above, and motion
vector MV3 of the block diagonally above to the right, are referred to
so as to obtain their median as a predicted value of the motion vector
MV of the present block. The predicted value PMV of the vector MV of
the present block is defined using the following equation (9).

    PMV = median(MV1, MV2, MV3)    (9)
[0043] If the reference block corresponds to the intraframe-coding
mode, no motion vector exists. Therefore, the median is calculated with
vector value 0 at the relevant position. If the reference block has
been predicted using the global motion compensation, no motion vector
exists either, so the median is calculated with vector value 0 at the
relevant position in this case as well. For example, if the left block
was predicted using the local motion compensation, the block
immediately above was predicted using the global motion compensation,
and the block diagonally above to the right was encoded using the
intraframe coding method, then MV2=MV3=0. In addition, if the three
reference blocks were all predicted using the global motion
compensation, then MV1=MV2=MV3=0. In this case, the median is also 0
and thus the predicted value is 0. Therefore, this case is equal to the
case in which the motion vector of the target block is not subjected to
predictive encoding, and the encoding efficiency is degraded.
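Equation (9) together with the zero-substitution rule above can be
sketched as follows; representing "no local motion vector" as None is
an assumption made here for illustration.

```python
# Sketch of equation (9) and the rule in [0043]: the current block's
# vector is predicted as the component-wise median of MV1, MV2, MV3,
# substituting (0, 0) for any reference block that was intra-coded or
# globally motion-compensated.

def median3(a, b, c):
    return sorted((a, b, c))[1]

def predicted_mv(neighbours):
    # neighbours: three entries, each an (x, y) tuple or None when that
    # block has no local motion vector (intraframe coding or GMC).
    mvs = [(0, 0) if n is None else n for n in neighbours]
    return (median3(*(m[0] for m in mvs)),
            median3(*(m[1] for m in mvs)))

# Left block local, block above GMC, block diagonally above intra:
# two zero vectors force the median (and thus the prediction) to 0.
assert predicted_mv([(4, -2), None, None]) == (0, 0)
```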
[0044] In the MPEG-4, the following seven kinds of ranges (see List 1)
are defined with respect to the size of the local motion vector, and
the used range is communicated to the decoder by using a codeword
"fcode" included in the bit stream.

List 1

    fcode    Range of motion vector
    1        -16    to   +15.5   pixels
    2        -32    to   +31.5   pixels
    3        -64    to   +63.5   pixels
    4        -128   to  +127.5   pixels
    5        -256   to  +255.5   pixels
    6        -512   to  +511.5   pixels
    7        -1024  to +1023.5   pixels
[0045] The global motion parameters used in MPEG-4 may have a wide
range of -2048 to +2047.5; thus, the motion vector determined based on
the global motion vector may have a value from -2048 to +2047.5.
However, the range of the local motion vector is smaller than the above
range, and the prediction may have a large error. For example, if
fcode=3, the motion vector of the target block is
(Vx, Vy) = (+48, +36.5), and the predicted vector determined based on
the global motion vector is (PVx, PVy) = (+102, +75), then the
prediction error is (MVDx, MVDy) = (-54, -38.5). The absolute values of
this error are thus larger than the above values of the motion vector
(Vx, Vy). The smaller the absolute values of the prediction error
(MVDx, MVDy), the shorter the length of the codeword assigned to the
prediction error. Therefore, there is a disadvantage in that the amount
of code is increased due to the prediction of the motion vector.
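List 1 and the worked example above can be checked numerically; the
closed-form expression for the fcode ranges below is an observation
about the table, not a formula taken from the text.

```python
# Sketch of List 1 and the example in [0045]: each fcode value fixes
# the representable range of a local motion-vector component, so a
# predictor taken from the wide-range global motion vector can inflate
# the prediction error beyond the target vector itself.

FCODE_RANGE = {f: (-16 * 2 ** (f - 1), 16 * 2 ** (f - 1) - 0.5)
               for f in range(1, 8)}  # fcode 1..7

# The worked example for fcode = 3 (range -64 to +63.5):
vx, vy = 48.0, 36.5        # target block's motion vector (Vx, Vy)
pvx, pvy = 102.0, 75.0     # predictor from the global motion vector
mvdx, mvdy = vx - pvx, vy - pvy   # prediction error: (-54.0, -38.5)

# |MVDx| > |Vx| and |MVDy| > |Vy|: the prediction enlarges the values
# that must be coded, so longer codewords are needed.
assert abs(mvdx) > abs(vx) and abs(mvdy) > abs(vy)
```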
[0046] Therefore, the objective of the present invention is to provide
a motion vector predictive encoding method, a motion vector decoding
method, a predictive encoding apparatus, a decoding apparatus, and
computer-readable storage media storing motion vector predictive
encoding and decoding programs, which reduce the amount of generated
code with respect to the motion vector and improve the efficiency of
the motion-vector prediction.
DISCLOSURE OF INVENTION

[0047] The motion vector predictive encoding method, predictive
encoding apparatus, and motion vector predictive encoding program
stored in a computer-readable storage medium according to the present
invention relate to a motion vector predictive encoding method in which
a target frame to be encoded is divided into small blocks and a
motion-compensating method to be applied to each target small block to
be encoded is selectable from among a plurality of motion-compensating
methods. In the present invention, when the motion vector of the target
small block is predicted based on the motion vector of an
already-encoded small block, if the motion model of the motion vector
of the target small block differs from the motion model of the motion
vector of the already-encoded small block, then the motion vector of
the target small block is predicted by converting the motion vector of
the already-encoded small block into one suitable for the motion model
of the motion vector used in the motion-compensating method of the
target small block and by calculating a predicted vector, and a
prediction error of the motion vector is encoded.
[0048] In the decoding method, decoding apparatus, and decoding
program stored in a computer-readable storage med