APPENDIX 2

Claim Charts of U.S. Patent No. 6,023,783
`
`
`
I. Claims 6-9 and 43-46 are unpatentable under 35 U.S.C. § 103(a) as obvious over Berrou ’747 in view of Forney

II. Claims 10, 11, 47, and 48 are unpatentable under 35 U.S.C. § 103(a) as obvious over Berrou ’747 and Forney further in view of Ungerboeck

III. Claims 12 and 49 are unpatentable under 35 U.S.C. § 103(a) as obvious over Berrou ’747, Forney, and Ungerboeck further in view of Massey

IV. Claims 18, 19, 55, and 56 are unpatentable under 35 U.S.C. § 103(a) as obvious over Deutsch in view of Berrou ’747

V. Claims 20, 21, 57, and 58 are unpatentable under 35 U.S.C. § 103(a) as obvious over Deutsch and Berrou ’747 further in view of Ungerboeck

VI. Claims 22 and 59 are unpatentable under 35 U.S.C. § 103(a) as obvious over Deutsch, Berrou ’747, and Ungerboeck further in view of Massey

VII. Claims 25, 26, 62, and 63 are unpatentable under 35 U.S.C. § 103(a) as obvious over Divsalar TCDSC, Berrou ’747, and Forney in view of Ungerboeck
`
`
`
`
`
`1
`
`
`
`Appendix 2
`
`EXHIBIT LIST
`
Exhibit No.  Exhibit Description

1001  U.S. Patent No. 6,023,783 to Divsalar, et al. (the “’783 Patent”)
1002  U.S. Patent Application No. 08/857,021 Specification & Drawings as Filed on May 15, 1997 (“the ’021 Application”)
1003  Declaration of Mark Lanning re the ’783 Patent with appendices
1004  U.S. Patent No. 5,446,747 to C. Berrou, filed on April 16, 1992 and issued on August 29, 1995 (“Berrou ’747”)
1005  “Convolutional Codes I: Algebraic Structure” by G. Forney, Jr., IEEE Transactions on Information Theory, 16:6, 1970 (“Forney”)
1006  U.S. Patent 4,907,233 to Deutsch, et al., filed on May 18, 1988, published on March 6, 1990, and issued on March 6, 1990 (“Deutsch”)
1007  “Trellis-coded Modulation with Redundant Signal Sets” by G. Ungerboeck, IEEE Communications Magazine, February 1987 (“Ungerboeck”)
1008  “Combined Multilevel Turbo-code with 8PSK Modulation” by K. Fazel, et al., IEEE Global Telecommunications Conference, November 14-16, 1995 (“Fazel”)
1009  “Turbo Codes for Deep-Space Communications” by D. Divsalar, et al., NASA TDA Progress Report, February 15, 1995 (“Divsalar TCDSC”)
1010  “Coding and Modulation in Digital Communications” by J. Massey, International Zurich Seminar on Digital Communications, March 1974 (“Massey”)
1011  U.S. Patent No. 5,734,962 to Hladik, et al., filed on July 17, 1996 and issued on March 31, 1998 (“Hladik”)
1012  “Recursive Systematic Convolutional Codes and Application to Parallel Concatenation” by P. Thitimajshima, IEEE Global Telecommunications Conference, November 14-16, 1995 (“Thitimajshima”)
1013  “Multiple Turbo Codes” by Divsalar, et al. (“Divsalar MTC”)
1014  “Deep-Space Communications and Coding: A Marriage Made in Heaven” by Massey (“Massey DS”)
1015  “Efficient Coding/Decoding Strategies for Channels with Memory” by Lai (“Lai”)
1016  “Nonsystematic Convolutional Codes for Sequential Decoding in Space Applications” by Massey, et al. (“Massey NC”)
1017  “Near Shannon limit error-correcting coding and decoding: Turbo-Codes” by Berrou, et al. (“Berrou NS”)
1018  “Claude Berrou: from turbo codes to the neocortex” available at: http://www.mines-telecom.fr/en/claude-berrou-from-turbo-codes-to-the-neocortex/
1019  Office Action issued October 5, 1998, Prosecution History of ’783 Patent
1020  Response to Office Action filed February 8, 1999, Prosecution History of ’783 Patent
1021  Final Office Action issued April 27, 1999, Prosecution History of ’783 Patent
1022  Interview Summary of Interview Conducted July 12, 1999, Prosecution History of ’783 Patent
1023  Notice of Allowability with Examiner’s Amendment issued July 16, 1999, Prosecution History of ’783 Patent
1024  U.S. Provisional Application No. 60/017,784 as Filed on May 15, 1996
1025  “An Iterative Decoding Scheme for Serially Concatenated Convolutional Codes” by M. Siala, et al., 1995 IEEE International Symposium on Information Theory, September 17-22, 1995 (“Siala”)
`
`
`
`3
`
`
`
`Appendix 2
`
`
`
`
`I. Claims 6-9 and 43-46 are unpatentable under 35 U.S.C. § 103(a) as
`obvious over Berrou ’747 in view of Forney
`
Claim 6. A system for error-correction coding of a plurality of sources of original digital data elements, comprising:
`
(a) a first systematic convolutional encoder, coupled to each source of original digital data elements, for generating a first set of series coded output elements derived from the original digital data elements;
`
`“An error-correction method for the coding of source digital data
`elements to be transmitted or broadcast.” (Ex. 1004, Abstract)
`
`“The source data elements d . . .” (Ex. 1004, col. 9, ll. 53-54) Input
`d in Fig. 1 (reproduced below) represents source digital data
`elements.
`
`Forney’s Fig. 7 (reproduced below) shows a systematic
`convolutional encoder coupled to a plurality of input sequences x1
`and x2. Each input sequence is a source of original digital data
`elements.
`
`“Systematic encoders seem to be reassuring to some people by
`virtue of preserving the original information sequences in the
`codewords.” (Ex. 1005, p. 737, col. 2, ¶ 3 (emphasis added))
`
`
`
`
`(Ex. 1004, Fig. 1 – annotations underlined)
`
`Referring to Fig. 1, “The modules 11 and 13 may be of any known
`systematic type. They are advantageously convolutional coders
`taking account of at least one of the preceding source data
`elements for the coding of the source data element d.” (Ex. 1004,
`col. 7, ll. 60-64)
`
`Fig. 1 shows module 11 as coupled to the source data elements d.
`“Each source data element d to be coded is directed, firstly,
`towards a first coding module 11 and, secondly, towards a
temporal interleaving module 12 which itself feed a second coding module 13.” (Ex. 1004, col. 7, ll. 50-53)
`
`“An essential feature of the invention is that the coded data
`elements y1 and y2 take account of the same source data element
`d.” (Ex. 1004, col. 7, ll. 66-68)
`
`“at least two independent steps of systematic convolutional coding,
`each of the coding steps taking account of all of the source data
`elements” (Ex. 1004, Abstract)
`
`Forney discloses a systematic convolutional encoder coupled to a
`plurality of input sequences.
`“Definition 1: An (n, k) convolutional encoder over a finite field F
`is a k-input n-output constant linear causal finite-state sequential
`circuit.
`Let us dissect this definition.
`K-input: There are k discrete-time input sequences xi, each with
`elements from F. We write the inputs as the row vector x.
`. . .
`N-output: There are n-output sequences yi, each with elements
`from F, which we write as the row vector y. The encoder is
`characterized by the map G, which maps any vector of input
`sequences x into some output vector y.”
`(Ex. 1005, p. 721, col. 2, ¶¶ 5-7; p. 722, ¶ 2 (emphasis added))
`
`“From the k-input sequences x, called information sequences,
`the encoder G generates a set of n-output sequences y, called a
`codeword, which is transmitted over some noisy channel.”
`(Ex. 1005, p. 723, col. 2, ¶ 3 (emphasis added))
`
`“Definition 2: The code generated by a convolutional encoder G is
`the set of all codewords y = xG, where the k inputs x are any
`sequences.”
`(Ex. 1005, p. 725, ¶ 4 (emphasis added))
`
`“[I]t is well known that the most efficient realization of a
conventional systematic rate-(n-1)/n code, n > 2, with maximum
`generator degree v, is Massey’s [14] type-II encoder in which a
`single length-v register forms all parity bits, as in Fig. 7.”
`(Ex. 1005, p. 737, ¶ 1 (emphasis added))
`
`
`
`5
`
`
`
`Appendix 2
`
`
`
`
`
`
`
`
`(Ex. 1005, p. 737, Fig. 7)
`
Forney’s Fig. 7 shows a systematic convolutional encoder coupled to two input data lines x1 and x2, which can be any sequences of original information. Each original information sequence input into the encoder constitutes a source of original data (e.g., the original information sequences x1 and x2 are the same as data sources u1 and u2 illustrated in Fig. 9 of the ’783 Patent). Further, the systematic convolutional encoder generates output sequences y1, y2, and y3, which constitute a set of series coded output elements derived from the original data x1 and x2.
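
For illustration only (the sketch below is not record evidence, and its tap positions are hypothetical), the structure Forney’s Fig. 7 describes can be written out in a few lines of Python: two inputs pass through unchanged as systematic outputs, while a single shift register forms the one parity output, giving rate R = 2/3.

def systematic_rate_2_3_encoder(x1, x2):
    # Toy rate-2/3 systematic convolutional encoder of the general
    # type shown in Forney's Fig. 7: a single length-2 shift register
    # forms the one parity stream. Tap positions are hypothetical.
    state = [0, 0]                   # length-v register, v = 2
    y1, y2, y3 = [], [], []
    for a, b in zip(x1, x2):
        y1.append(a)                 # systematic output: y1 = x1
        y2.append(b)                 # systematic output: y2 = x2
        y3.append(a ^ b ^ state[1])  # parity from inputs and register
        state = [a ^ b, state[0]]    # shift the register
    return y1, y2, y3

# Two input bits in, three output bits out per step: code rate R = 2/3.
y1, y2, y3 = systematic_rate_2_3_encoder([1, 0, 1, 1], [0, 1, 1, 0])
assert (y1, y2) == ([1, 0, 1, 1], [0, 1, 1, 0])  # inputs preserved in the codeword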
`
It would be obvious for one of ordinary skill in the art to combine the known turbo coder taught by Berrou ’747 with the known multi-input systematic convolutional encoder taught by Forney to yield the predictable result of a higher code rate, and thereby higher coding efficiency, because the code rate of a multi-input convolutional encoder was known to be higher than that of an otherwise similar single-input convolutional encoder. For instance, the
`multi-input convolutional encoder shown in Forney’s Fig. 7 has a
`code rate R=2/3 (two inputs, three outputs), while the single-input
`convolutional encoder shown in Berrou ’747’s Fig. 7, has a code
`rate R=1/2 (one input, two outputs). See Ex. 1005, p. 737, Fig. 7
`Caption (specifying that Fig. 7 shows a “rate-2/3 systematic
`encoder”). See also Ex. 1004, col. 7, ll. 25-28 (“FIG. 7 shows an
`example of a “pseudo-systematic” coding module, having a
`constraint length ν=2 and an efficiency rate R=1/2, that can be
`used in the coder of FIG. 1.” (emphasis added))
`See Ex. 1004, Fig. 1 (reproduced above)
`
`6
`
`(b) at least
`
`
`
`
`
`Appendix 2
`
`
`one set of
`interleavers
`, each set
`coupled to
`respective
`sources of
`original
`digital data
`elements,
`for
`modifying
`the order of
`the original
`digital data
`elements
`from the
`respective
`coupled
`sources to
`generate a
`respective
`set of
`interleaved
`elements;
`and
`
`
`
`
`“the method comprising at least one step for the temporal
`interleaving of the source data elements, modifying the order in
`which the source data elements are taken into account for each of
`the coding steps” (Ex. 1004, Abstract (emphasis added))
`
`“Each source data element d to be coded is directed, firstly,
`towards a first coding module 11 and, secondly, towards a
`temporal interleaving module 12 which itself feed a second coding
`module 13.” (Ex. 1004, col. 7, ll. 50-53 (emphasis added))
`
`“Besides, any other technique that enables the order of the source
`data elements to be modified may be used in this temporal
`interleaving module 12.” (Ex. 1004, col. 8, ll. 9-11 (emphasis
`added))
`
`“An essential feature of the invention is that the coded data
`elements Y1 and Y2 take account of the same source data elements
`d, but these are considered according to different sequences,
`through of the interleaving technique. This interleaving may be
`obtained in a standard way by means of an interleaving matrix in
`which the source data elements are introduced row by row and
`restored column by column.” (Ex. 1004, col. 7, ll. 66 – col. 8, ll. 5
`(emphasis added))
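
The “standard” matrix interleaver quoted above (elements introduced row by row and restored column by column) can be sketched in Python; the 2 x 4 block size below is arbitrary and chosen only for illustration.

def matrix_interleave(data, n_cols):
    # Block interleaver as described in Berrou '747: fill a matrix
    # row by row, then read it out column by column. Assumes
    # len(data) is a multiple of n_cols.
    n_rows = len(data) // n_cols
    rows = [data[r * n_cols:(r + 1) * n_cols] for r in range(n_rows)]
    return [rows[r][c] for c in range(n_cols) for r in range(n_rows)]

# Elements 0..7 fill a 2 x 4 matrix as rows [0,1,2,3] and [4,5,6,7],
# which are restored column by column as [0, 4, 1, 5, 2, 6, 3, 7].
assert matrix_interleave(list(range(8)), 4) == [0, 4, 1, 5, 2, 6, 3, 7]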
`
To achieve the coding advances realized by Berrou ’747’s turbo coder on one or more additional input data sources, one of ordinary skill in the art would understand that each additional data source must be temporally interleaved and then coded by a second or next convolutional coder. The drawing below illustrates that this addition to Fig. 1 of Berrou ’747’s turbo coder is obvious.
`
`
`
`7
`
`
`
`Appendix 2
`
`
`
`
`
`
`(Ex. 1004, Fig. 1—annotations underlined and in dotted lines)
`
`It would be obvious to one of ordinary skill in the art that the
`addition of a second data source d2 (as taught by Forney) to
`Berrou ’747’s coding system requires d2 to be not only coded by
`the first encoder 11 but also temporally interleaved and then coded
`by the second encoder 13, just as the first data source d was coded
`by the first encoder 11, and temporally interleaved and then coded
by the second encoder 13.
`
`As shown in the diagram above, the second data source d2 must be
`temporally interleaved before it feeds into the encoder 13, because
`temporal interleaving modifies the bit orders of data source d2,
`allowing the encoders 11 and 13 to encode different versions of d2,
`which leads to the robust coding of turbo code. Without temporal
`interleaving of the second input d2, the exact same data would feed
`into both encoders 11 and 13, resulting in less robust and less
efficient coding. For instance, if encoders 11 and 13 are the same
`(common for turbo coding), and the second input d2 is not
`temporally interleaved, then their coded outputs would also be the
`same, which is unnecessarily duplicative.
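
This duplication can be verified directly. In the Python sketch below (hypothetical values only; a single accumulator stands in for the identical coding modules 11 and 13), encoding the same uninterleaved input twice yields identical coded outputs, while interleaving d2 first yields a distinct second output.

def parity_stream(bits):
    # Stand-in for the parity path of a systematic coding module;
    # a single running accumulator, for brevity.
    acc, out = 0, []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

d2 = [1, 0, 1, 1, 0, 0, 1, 0]
pi = [2, 5, 0, 7, 3, 6, 1, 4]      # hypothetical interleaving permutation

# Identical modules, identical input: the two coded outputs coincide.
assert parity_stream(d2) == parity_stream(d2)

# Interleaving d2 before the second module yields a different coded
# output, i.e., non-duplicative redundancy on the same source data.
d2_interleaved = [d2[i] for i in pi]
assert parity_stream(d2_interleaved) != parity_stream(d2)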
`
`Berrou ’747 encourages using temporal interleaving for the next
`convolutional encoder. “This interleaving step enables all the
`source data elements to be taken into account and coded, but
`according to different sequences for the two codes.” (Ex. 1004, col.
`4, ll. 24-26) “This technique has the favorable effect, during the
`decoding, of ‘breaking’ the rectangularly arranged error packets
`with respect to which the decoding method is more vulnerable.
`This interleaving technique shall be known hereinafter as the
`
`
`
`8
`
`
`
`Appendix 2
`
`
`
`
`‘dispersal technique.’” (Ex. 1004, col. 4, ll. 44-48 (emphasis
`added))
`
`In addition, interleaving or scrambling multiple sources of data
`was well-known in the art and disclosed by Forney. See Ex. 1005,
`p. 728, Fig. 5 and id., p. 727, col. 2, para. 2 (referring to Fig. 5,
`“Input sequences are scrambled in the k × k R-scrambler A.”
`(emphasis added)).
`
(c) at least one next systematic convolutional encoder, each coupled to at least one set of interleaved elements, each for generating a corresponding next set of series coded output elements derived from the coupled sets of interleaved elements,

See Ex. 1004, Fig. 1 (reproduced above)
`
`Referring to Fig. 1, “The modules 11 and 13 may be of any known
`systematic type. They are advantageously convolutional coders
`taking account of at least one of the preceding source data
`elements for the coding of the source data element d.” (Ex. 1004,
`col. 7, ll. 60-64 (emphasis added))
`
`In Fig. 1, module 13 is ‘coupled’ to the interleaving module 12.
`
`“at least two independent steps of systematic convolutional coding,
`each of the coding steps taking account of all of the source data
`elements” (Ex. 1004, Abstract)
`
`“the method comprising at least one step for the temporal
`interleaving of the source data elements, modifying the order in
`which the source data elements are taken into account for each of
`the coding steps” (Ex. 1004, Abstract)
`
`“Each source data element d to be coded is directed, firstly,
`towards a first coding module 11 and, secondly, towards a
`temporal interleaving module 12 which itself feed a second coding
`module 13.” (Ex. 1004, col. 7, ll. 50-53 (emphasis added))
`
`“An essential feature of the invention is that the coded data
`elements y1 and y2 take account of the same source data element
`d.” (Ex. 1004, col. 7, ll. 66-68)
`
`It would be obvious for one of ordinary skill in the art to combine
`the known turbo coder taught by Berrou ’747 with the known
multi-input systematic convolutional encoder taught by Forney to
yield the predictable result of a higher code rate, and thereby higher coding efficiency, because the code rate of a multi-input convolutional encoder was known to be higher than that of an otherwise similar single-input convolutional encoder. For instance, the multi-input convolutional encoder shown
`in Forney’s Fig. 7 has a code rate R=2/3 (two inputs, three
`outputs), while the single-input convolutional encoder shown in
`Berrou ’747’s Fig. 7, has a code rate R=1/2 (one input, two
`outputs). See Ex. 1005, p. 737, Fig. 7 Caption (specifying that Fig.
`7 shows a “rate-2/3 systematic encoder”). See also Ex. 1004, col.
`7, ll. 25-28 (“FIG. 7 shows an example of a “pseudo-systematic”
`coding module, having a constraint length ν=2 and an efficiency
`rate R=1/2, that can be used in the coder of FIG. 1.” (emphasis
`added))
`
`The obvious addition of Forney’s second data source to Berrou
`’747’s turbo code, as discussed above, requires that the second
`systematic convolutional encoder of Berrou ’747 be coupled to at
least one set of interleaved elements. Again, the addition to Fig. 1 of Berrou ’747’s turbo coder is obvious, as shown below.
`
`
`
`
`(Ex. 1004, Fig. 1—annotations underlined and in dotted lines)
`
`The addition of a second original data source d2 to be coded by a
`first convolutional coder (as taught by Forney), and interleaving
`then coding the second data source, just as the first data source d
`was to be interleaved and coded (as taught by Berrou ’747), would
`have been required in order to obtain the advantages of Berrou
’747’s turbo code, and thus would have been obvious to one of ordinary skill in the art. The next systematic convolutional encoder, coupled to at least
`one set of interleaved elements, would generate a next set of series
`coded output elements derived from the coupled sets of interleaved
`elements.
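
The combined structure argued above (two sources, a first coding module in natural order, and a next coding module fed through per-source interleavers) can be laid out end to end in Python. All component choices below (the accumulator parity path, the permutations) are hypothetical stand-ins used only to show the data flow, not the claimed implementation.

def parity_stream(bits):
    # Stand-in for a coding module's parity path (modules 11 and 13).
    acc, out = 0, []
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def two_source_turbo_encoder(d1, d2, pi1, pi2):
    # First coding module codes both sources in natural order; the
    # next coding module codes both sources after per-source temporal
    # interleaving. The systematic outputs are d1 and d2 themselves;
    # coded1 and coded2 are the first and next sets of series coded
    # output elements, produced in parallel.
    coded1 = parity_stream(d1 + d2)
    coded2 = parity_stream([d1[i] for i in pi1] + [d2[i] for i in pi2])
    return d1, d2, coded1, coded2

d1, d2 = [1, 0, 1, 1], [0, 1, 1, 0]
pi1, pi2 = [2, 0, 3, 1], [1, 3, 0, 2]   # hypothetical permutations
x1, x2, coded1, coded2 = two_source_turbo_encoder(d1, d2, pi1, pi2)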
`
each next set of series coded output elements being in parallel with the first set of series coded output elements.

See Ex. 1004, Fig. 1 (reproduced above). The first and second systematic coding modules 11 and 13 are shown in parallel.
`
`“The present invention relies on two novel concepts, namely a
`coding method simultaneously carrying out several coding
`operations, in parallel, and a method of iterative coding.” (Ex.
`1004, col. 7, ll. 31-34 (emphasis added))
`
`
`
Claim 7. The system of claim 6, wherein the system for error-correction coding further outputs the original digital data elements.

(Ex. 1004, Fig. 1 – annotations underlined)
`
`“In the embodiment shown in FIG. 1, a data element X, equal to
`the source data element d, is transmitted systematically.” (Ex.
`1004, col. 8, ll. 12-14)
`
`Forney’s Fig. 7 shows a systematic convolutional encoder
`outputting sequences y1 and y2 that are equal to input sequences x1
`and x2, respectively.
`
`11
`
`each next
`set of series
`coded
`output
`elements
`being in
`parallel
`with the
`first set of
`series
`coded
`output
`elements.
`
`Claim 7.
`The system
`of claim 6,
`wherein the
`system for
`error-
`correction
`coding
`further
`outputs the
`original
`digital data
`elements.
`
`
`
`
`
`
`
`Appendix 2
`
`
`
`
`(Ex. 1005, p. 737, Fig. 7)
`
`“Definition 1: An (n, k) convolutional encoder over a finite field F
`is a k-input n-output constant linear causal finite-state sequential
`circuit.
`Let us dissect this definition.
`K-input: There are k discrete-time input sequences xi, each with
`elements from F. We write the inputs as the row vector x.
`. . .
`N-output: There are n-output sequences yi, each with elements
`from F, which we write as the row vector y. The encoder is
`characterized by the map G, which maps any vector of input
`sequences x into some output vector y.”
`(Ex. 1005, p. 721, col. 2, ¶¶ 5-7; p. 722, ¶ 2 (emphasis added))
`
`It would be obvious for one of ordinary skill to configure a
`“systematic” coder by outputting the original data, because such an
`approach taught by both Berrou ’747 and Forney was well-known
`to be an effective way of coding.
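
In code terms, the “systematic” configuration of claim 7 amounts to the output set carrying the original data elements verbatim alongside the redundancy. A minimal Python sketch, with toy placeholder values only:

d = [1, 0, 1, 1]             # original digital data elements
parity = [0, 1, 1, 0]        # placeholder coded elements
output = d + parity          # systematic: the original data is output too
assert output[:len(d)] == d  # "preserving the original information
                             # sequences in the codewords" (Forney)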
`
Claim 8. The system of claim 6, wherein the system for error-correction coding outputs only the first set of series coded output elements and each next set of series coded output elements.

(Ex. 1005, p. 724, Fig. 3—annotations “Source data” and “Only coded output” underlined)
`
`
`
`
Fig. 3 illustrates a nonsystematic system, since encoder G receives original data and outputs only coded data, not the original data.
`
`“We have settled on minimal encoders as the canonical encoders
`for convolutional codes. In general a minimal encoder is
`nonsystematic; that is, the information sequences do not in general
`form part of the codeword.” (Ex. 1005, p. 735, col. 2, ¶ 1
`(emphasis added))
`
` “One suspects that the main reason that nonsystematic encoders
`have heretofore not been used is ignorant fear of error propagation.
`Such fears are largely groundless, for a feedback-free inverse
`guarantees no catastrophic error propagation.” (Ex. 1005, p. 737,
col. 2, ¶ 2) Nonsystematic coding systems (those that transmitted
`generally only the coded data and did not transmit the original
`data) were well known.
`
`One of ordinary skill would have been motivated to replace the
`Encoder G in Fig. 3 of Forney with the prior art turbo coder in Fig.
`1 of Berrou ’747, since the turbo coder was well known in the field
`to have superior performance (e.g., bit error rate close to the
`Shannon Limit) compared to other coder types. In this case,
`deploying a well-known turbo coder to a well-known
`nonsystematic coding system (that outputs only coded data) would
`yield the predictable result of an improved coder.
`
`Further, one of ordinary skill in the art would have been motivated
`to use the turbo coder of Berrou ’747 to make the overall system
`nonsystematic (not outputting the original data) as taught by
`Forney because it conserves bandwidth and transmission power by
`reducing the amount of data to transmit by outputting only coded
`data but not the original data. Specifically, while one of ordinary
`skill in the art would have understood that a systematic turbo coder
`had certain advantages, it would have been obvious that one could
`transmit only the coded data in such applications to, among other
`things, serve the dual purposes of minimizing error rate while
`maximizing coding efficiency. Indeed, Forney recognized that “the
`main reason that nonsystematic encoders have heretofore not been
used is ignorant fear of error propagation. Such fears are largely
`groundless . . . . ” (Ex. 1005, p. 737, col. 2, ¶ 2)
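
By contrast, the nonsystematic configuration of claim 8 transmits only the coded streams. A minimal Python sketch under the same toy assumptions (the parity streams below are placeholders):

d = [1, 0, 1, 1]                 # original digital data elements
coded_first = [0, 1, 1, 0]       # first set of series coded output elements
coded_next = [1, 1, 0, 0]        # next set of series coded output elements

# Nonsystematic: only the coded sets are output; d itself is not
# transmitted, conserving bandwidth and transmission power.
transmitted = coded_first + coded_next
assert all(stream != d for stream in (coded_first, coded_next))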
`
`
`
`
`
Claim 9. The s[ys]tem of claim 6, further including a decoder for receiving signals representative of at least some of the first set of series coded output elements and of at least some of each next set of series coded output elements, and
`
`“1. A method for error-correction coding of source digital data
`elements, comprising the steps of:
`implementing at least two independent and parallel steps of
`systematic convolutional coding, each of said coding steps taking
`account of all of said source data elements and providing parallel
`outputs of distinct series of coded data elements;
`and temporally interleaving said source data elements to modify
`the order in which said source data elements are taken into account
`for at least one of said coding steps.
`
`10. A method for decoding received digital data elements
`representing source data elements coded according to the coding
`method of claim 1, wherein said decoding method comprises an
`iterative decoding procedure comprising the steps of:
`in a first iteration, combining each of said received digital data
`elements with a predetermined value to form an intermediate data
`element,
`decoding the intermediate data element representing each received
`data element to produce a decoded data element,
`estimating said source data element, by means of said decoded data
`element, to produce an estimated data element,
`and for all subsequent iterations, combining each of said received
`data elements with one of said estimated data elements estimated
`during a preceding iteration.
`
`16. A method according to claim 10, of the type carrying out the
`decoding of a first and a second series of received data elements
`representing source data coded according to a coding method
`implementing two redundant coding steps in parallel, the first
`coding step carrying out a first redundant coding on all the source
`data taken in natural order and the second coding step carrying out
`a second redundant coding on all the source data taken in an order
`modified by a temporal interleaving step to produce two distinct
`series of coded data elements, wherein said decoding method
`comprises the consecutive steps of:
`
`14
`
`
`
`Appendix 2
`
`
`
`
`first decoding according to said first redundant coding the first
`series of received data elements taken together with at least one of
`said intermediate data elements to produce a series of first decoded
`data elements;
`temporally interleaving, identical to said interleaving step of the
`coding method, said first decoded data elements to form a series of
`decoded de-interleaved data elements;
`second decoding according to said second redundant coding said
`decoded de-interleaved data elements and the second series of
`received data elements to produce a series of second decoded data
`elements;
`estimating the source data from at least one of said series of first
`and second decoded data elements to produce a series of estimated
`data elements; and
`de-interleaving, symmetrical to said interleaving step, said
`estimated data elements.”
`(Ex. 1004, Claims (emphasis added))
`
`
`
`
`.
`(Ex. 1004, Fig. 3)
`
“The module 31p has at least two inputs: the received data element
`X to be decoded and a data element Zp, representing this received
`data element X, estimated by the previous module 31p-1, and two
`outputs: the estimated data element Zp and the decoded value S,
`taken into account solely at output of the last module.” (Ex. 1004,
`col. 10, ll. 39-44)
`
`Based on Berrou ’747, a decoder receives digital data elements,
`which are signals representative of parallel coded output elements.
`
`Forney also discloses a decoder for receiving coded data.
`
`
`
`15
`
`
`
`Appendix 2
`
`
`
`
`Source
`data
`
`Only coded
`output
`
`
`
`(Ex. 1005, p. 724, Fig. 3 – annotations underlined)
`
` “Consider then the use of a convolutional encoder in a
`communications system, shown in Fig. 3. From the k-input
`sequences x, called information sequences, the encoder G
`generates a set of n-output sequences y, called a codeword, which
`is transmitted over some noisy channel. The received data,
`whatever their form, are denoted by r; a decoder operates on r in
some way to produce k decoded sequences x̂, preferably not too
`different from x.” (Ex. 1005 p. 723, col. 2, ¶ 3 (emphasis added))
`
`Based on Forney, the decoder receives data r that is representative
`of codeword y, which represents coded output elements from the
`encoder side. Thus, data r is representative of at least some of the
`first series of coded output elements and of at least some of each
`next series of coded output elements.
`
for generating the original digital data elements from such received signals.

“10. A method for decoding received digital data elements
`representing source data elements coded according to the coding
`method of claim 1, wherein said decoding method comprises an
`iterative decoding procedure comprising the steps of:
`in a first iteration, combining each of said received digital data
`elements with a predetermined value to form an intermediate data
`element,
`decoding the intermediate data element representing each received
`data element to produce a decoded data element,
`estimating said source data element, by means of said decoded data
`element, to produce an estimated data element,
`and for all subsequent iterations, combining each of said received
`data elements with one of said estimated data elements estimated
`during a preceding iteration.
`…
`16. A method according to claim 10, of the type carrying out the
`
`
`
`
`
`
`
`
`decoding of a first and a second series of received data elements
`representing source data coded according to a coding method
`implementing two redundant coding steps in parallel, the first
`coding step carrying out a first redundant coding on all the source
`data taken in natural order and the second coding step carrying out
`a second redundant coding on all the source data taken in an order
`modified by a temporal interleaving step to produce two distinct
`series of coded data elements, wherein said decoding method
`comprises the consecutive steps of:
`first decoding according to said first redundant coding the first
`series of received data elements taken together with at least one of
`said intermediate data elements to produce a series of first decoded
`data elements;
`temporally interleaving, identical to said interleaving step of the
`coding method, said first decoded data elements to form a series of
`decoded de-interleaved data elements;
`second decoding according to said second redundant coding said
`decoded de-interleaved data elements and the second series of
`received data elements to produce a series of second decoded data
`elements;
`estimating the source data from at least one of said series of first
`and second decoded data elements to produce a series of estimated
`data elements; and
`de-interleaving, symmetrical to said interleaving step, said
`estimated data elements.”
`(Ex. 1004, Claims (emphasis added))
`
`
`.
`(Ex. 1004, Fig. 3)
`
“The module 31p has at least two inputs: the received data element
`X to be decoded and a data element Zp, representing this received
data element X, estimated by the previous module 31p-1, and two
`outputs: the estimated data element Zp and the decoded value S,
`taken into account solely at output of the last module.” (Ex. 1004,
`col. 10, ll. 39-44)
`
`Based on Berrou ’747, a decoder estimates the source data (i.e.,
`original digital data elements) from the received signals. One of
`ordinary skill in the art would understand that since a decoder does
`not have direct access to the source data, the decoder generates the
`source data by way of estimation from its received signals.
`
`See Ex. 1005, p. 724, Fig. 3 (reproduced above)
`
` “From the k-input sequences x, called information sequences, the
`encoder G generates a set of n-output sequences y . . . A decoder
operates on r in some way to produce k decoded sequences x̂,
`preferably not too different from x.” (Ex. 1005, p. 723, col. 2, ¶ 3
`(emphasis added))
`
`“In practice a decoder is usually not realized in these two pieces,
`but it is clear that since all the information about the information
`sequences x comes through y, the decoder can do no better
`estimating x directly than by estimating y and making the one-to-
`one correspondence to x.” (Ex. 1005, p. 724, ¶ 2 (emphasis
`added))
`
`“When G is one-to-one, as long as the codeword estimator makes
`no errors, there will be no error in the decoded sequences.” (Ex.
`1005, p. 724, ¶ 3)
`
Based on Forney, the decoder produces decoded sequences x̂ that are equivalent to x. If there are little to no errors in the decoding process, the decoder would generate the original digital data elements from the received signals.
`
`It would be obvious for one of ordinary skill in the art to use the
`prior art decoder of either Berrou ’747 or Forney to receive signals
`representative of the coded data elements and generate the original
`data elements. Doing so yields a predictable result that the decoder
`decodes or recovers the original data elements.
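
The iterative procedure recited in Berrou ’747’s claim 10 (quoted above) can be laid out as a Python skeleton. The decode_step and estimate callables below are placeholders; the record does not specify their internals at this level of the chart, and the sketch shows only the claimed loop structure.

def iterative_decode(received, n_iterations, decode_step, estimate, z0=0.0):
    # Skeleton of the loop recited in claim 10 of Berrou '747: the
    # first iteration combines each received element with a
    # predetermined value z0; each later iteration combines it with
    # the estimate produced during the preceding iteration.
    estimates = [z0] * len(received)
    decoded = []
    for _ in range(n_iterations):
        intermediate = [r + z for r, z in zip(received, estimates)]  # combine
        decoded = [decode_step(v) for v in intermediate]             # decode
        estimates = [estimate(s) for s in decoded]                   # estimate
    return decoded

# Example with trivial placeholder functions:
out = iterative_decode([0.9, -1.1, 0.4], 3,
                       decode_step=lambda v: 1 if v > 0 else -1,
                       estimate=lambda s: 0.5 * s)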
`
`
`
`
`
`18
`
`
`
`Appendix 2
`
`
Claim 43. A method for error-correction coding of a plurality of sources of original digital data elements, comprising the steps of:
`
(a) generating a first set of series systematic convolutional encoded output elements derived from a plurality of sources of original digital data elements;
`
`
`
`
`“An error-correction method for the coding of source digital data
`elements to be transmitted or broadcast.” (Ex. 1004, Abstract)
`
`“The source data elements d . . .” (Ex. 1004, col. 9, ll. 53-54) Input
`d in Fig. 1 (reproduced below) represents source digital data
`elements. Berrou ’747 does not specify whether the “source digital
`data elements” are from one source or multiple sources.
`
`Forney’s Fig. 7 (reproduced below) shows a systematic
`convolutional encoder coupled to a plurality of input sequences x1
`and x2. Each input sequence is a source of original digital data
`elements.
`
`“Systematic encoders seem to be reassuring to some people by
`virtue of preserving the original information sequences in the
`codewords.” (Ex. 1005, p. 737, col. 2, ¶ 3 (emphasis added))
`
`
`
`
`(Ex. 1004, Fig. 1 – annotations underlined)
`
`Referring to Fig. 1, “The modules 11 and 13 may be of any known
`systematic type. They are advantageously convolutional coders
`taking account of at least one of the preceding source data
`elements for the coding of the source data element d.” (Ex. 1004,
`col. 7, ll. 60-64)
`
`Fig. 1 shows module 11 as coupled to the source data elements d.
`“Each source data element d to be coded is directed, firstly,
`towards