Multiple Turbo Codes

D. Divsalar and F. Pollara¹

Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109
ABSTRACT: In this paper, we introduce multiple turbo codes and a suitable decoder structure derived from an approximation to the maximum a posteriori probability (MAP) decision rule, which is substantially different from the decoder for two-code-based encoders. We developed new rate 1/3 and 2/3 constituent codes to be used in the turbo encoder structure. These codes, for 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n, b = 1, 2 and n = 2, 3, 4, and include random interleavers for better asymptotic performance. A rate 2/4 code with 16QAM modulation was used to realize a turbo trellis coded modulation (TTCM) scheme at 2 bit/sec/Hz throughput, whose performance is within 1 dB of the Shannon limit at BER = 10^-5.
I. INTRODUCTION

Coding theorists have traditionally attacked the problem of designing good codes by developing codes with a lot of structure, which lends itself to feasible decoders, although coding theory suggests that codes chosen "at random" should perform well if their block size is large enough. The challenge of finding practical decoders for "almost" random, large codes was not seriously considered until recently. Perhaps the most exciting and potentially important development in coding theory in recent years has been the dramatic announcement of "turbo codes" by Berrou et al. in 1993 [7]. The announced performance of these codes was so good that the initial reaction of the coding establishment was deep skepticism, but recently researchers around the world have been able to reproduce those results [15, 18, 9]. The introduction of turbo codes has opened a whole new way of looking at the problem of constructing good codes [5] and decoding them with low complexity [7, 2].

These codes achieve near-Shannon-limit error correction performance with relatively simple component codes and large interleavers. A required Eb/N0 of 0.7 dB was reported for a bit error rate (BER) of 10^-5 for a rate 1/2 turbo code [7].

The purpose of this paper is to: (1) design the best component codes for turbo codes of various rates; (2) describe a suitable trellis termination rule; (3) design pseudo-random interleavers; (4) design turbo codes with multiple component codes; (5) design an iterative decoding method for multiple turbo codes by approximating the optimum bit decision rule; (6) design low-rate turbo codes for power-limited channels (deep-space communications) and CDMA; (7) design high-rate turbo codes for bandwidth-limited channels (turbo trellis coded modulation); and (8) give examples and simulation results.
II. PARALLEL CONCATENATION OF CONVOLUTIONAL CODES

The codes considered in this paper consist of the parallel concatenation of multiple convolutional codes with random interleavers (permutations) at the input of each encoder. This extends the original results on turbo codes reported in [7], which considered turbo codes formed from just two constituent codes and an overall rate of 1/2.
Figure 1 illustrates a particular example that will be used in this paper to verify the performance of these codes. The encoder contains three recursive binary convolutional encoders, with m1, m2, and m3 memory cells, respectively. In general, the three component encoders may not be identical and may not have identical code rates.
¹ The research described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
Figure 1: Example of encoder with three codes
The first component encoder operates directly (or through π1) on the information bit sequence u = (u1, ..., uN) of length N, producing the two output sequences x0 and x1. The second component encoder operates on a reordered sequence of information bits, u2, produced by an interleaver π2 of length N, and outputs the sequence x2. Similarly, subsequent component encoders operate on a reordered sequence of information bits, uj, produced by interleaver πj, and output the sequence xj. The interleaver is a pseudorandom block scrambler defined by a permutation of N elements with no repetitions: a complete block is read into the interleaver and read out in a specified (fixed) random order. The same interleaver is used repeatedly for all subsequent blocks. Figure 1 shows an example where a rate r = 1/n = 1/4 code is generated by three component codes with m1 = m2 = m3 = m = 2, producing the outputs x0 = u, x1 = u · g1/g0, x2 = u2 · g1/g0, and x3 = u3 · g1/g0 (here π1 is assumed to be an identity, i.e., no permutation), where the generator polynomials g0 and g1 have octal representations (7)octal and (5)octal, respectively. Note that various code rates can be obtained by proper puncturing of x1, x2, x3, and even x0 if the decoder works (for an example, see Section VIII).
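To make the structure of Fig. 1 concrete, the following sketch (ours, not the authors' implementation) encodes a block with three recursive systematic constituent encoders using the octal generators g0 = 7, g1 = 5 quoted above. Bit lists, randomly drawn interleavers, and the omission of trellis termination are all simplifying assumptions made only for illustration.

```python
import random

def rsc_parity(u, g0, g1):
    """Parity stream u * g1(D)/g0(D) of a recursive systematic convolutional
    encoder.  g0, g1 are coefficient lists [c0, c1, ..., cm] (c0 = constant
    term); g0[0] is assumed to be 1."""
    m = len(g0) - 1
    state = [0] * m                    # shift register of past feedback bits
    parity = []
    for bit in u:
        fb = (bit + sum(g0[i + 1] * state[i] for i in range(m))) % 2
        p = (g1[0] * fb + sum(g1[i + 1] * state[i] for i in range(m))) % 2
        parity.append(p)
        state = [fb] + state[:-1]
    return parity

def multiple_turbo_encode(u, interleavers, g0=(1, 1, 1), g1=(1, 0, 1)):
    """Rate 1/4 parallel concatenation of Fig. 1 (tail bits ignored here):
    x0 = u, x1 = u*g1/g0, x2 = pi2(u)*g1/g0, x3 = pi3(u)*g1/g0."""
    pi2, pi3 = interleavers
    x0 = list(u)
    x1 = rsc_parity(u, g0, g1)
    x2 = rsc_parity([u[i] for i in pi2], g0, g1)
    x3 = rsc_parity([u[i] for i in pi3], g0, g1)
    return x0, x1, x2, x3

N = 16
u = [random.randint(0, 1) for _ in range(N)]
pi2 = random.sample(range(N), N)
pi3 = random.sample(range(N), N)
x0, x1, x2, x3 = multiple_turbo_encode(u, (pi2, pi3))
```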
We use the encoder in Fig. 1 to generate an (n(N + m), N) block code, where the m tail bits of code 2 and code 3 are not transmitted. Since the component encoders are recursive, it is not sufficient to set the last m information bits to zero in order to drive the encoder to the all-zero state, i.e., to terminate the trellis. The termination (tail) sequence depends on the state of each component encoder after N bits, which makes it impossible to terminate all component encoders with m predetermined tail bits. This issue, which had not been resolved in the original turbo code implementation, can be dealt with by applying the simple method described in [9], which is valid for any number of component codes. A more complicated method is described in [17].
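One common way to realize such a termination, sketched below as an illustrative reconstruction rather than the exact circuit of [9], is to feed each recursive encoder, for m additional clock cycles, the input that cancels its own feedback, so the register fills with zeros; the tail input and parity bits are read off during those cycles. The sketch assumes a state list such as the register contents left behind by rsc_parity above.

```python
def rsc_terminate(state, g0, g1):
    """Flush one recursive encoder (b = 1) to the all-zero state.  For each of
    m steps, the tail input is chosen equal to the feedback value, so the bit
    shifted into the register is 0.  Returns the (tail_input, tail_parity)
    pairs and the (all-zero) final state.  Sketch only."""
    m = len(g0) - 1
    tail = []
    for _ in range(m):
        fb_without_input = sum(g0[i + 1] * state[i] for i in range(m)) % 2
        u_tail = fb_without_input          # cancels the feedback
        p = (g1[0] * 0 + sum(g1[i + 1] * state[i] for i in range(m))) % 2
        tail.append((u_tail, p))
        state = [0] + state[:-1]
    return tail, state
```

Each component encoder is terminated separately from its own state, which is why the tails cannot be fixed in advance.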
The design of the constituent convolutional codes, which are not necessarily optimum convolutional codes, was originally reported in [5] for rate 1/n codes. In this paper we extend the results to rate b/n codes. It was suggested in [2] (without proof) that good random codes are obtained if g0 is a primitive polynomial. This suggestion was used in [5] to obtain
0-7803-2489-7/95 $4.00 © 1995 IEEE
data sequences are an important factor in the design of the constituent codes, and that higher-weight sequences have successively decreasing importance [12, 11]. Also, increasing the number of codes and, correspondingly, the number of interleavers makes it more and more likely that the bad input sequences will be broken up by one or more of the permutations.

The overall minimum distance is not the most important characteristic of the turbo code if it is due to weight-n data sequences with n > 2. The performance of turbo codes with random interleavers can be obtained by transfer function bounding techniques [6, 4, 12, 13].
IV. DESIGN OF PARTIALLY RANDOM INTERLEAVERS

Interleavers should be capable of spreading low-weight input sequences so that the resulting codeword has high weight. In order to break low-weight sequences, random interleavers are desirable.

We have designed semirandom permutations (interleavers) by generating random integers i, 1 ≤ i ≤ N, without replacement. We define an "S-random" permutation as follows: each randomly selected integer is compared to the S previously selected integers, and if it is within a distance of ±S of any of them, the current selection is rejected. This process is repeated until all N integers are selected. The searching time for this algorithm increases with S, and the search is not guaranteed to finish successfully. However, we have observed that choosing S < sqrt(N/2) usually produces a solution in a reasonable time. Note that for S = 1, we have a purely random interleaver.
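One simple way to realize this semirandom construction is sketched below; it is our reading of the description above, with a greedy scan over a shuffled pool plus restart used in place of repeated random rejection, and the function name and restart limit are ours.

```python
import random

def s_random_permutation(N, S, max_attempts=1000):
    """Build an 'S-random' permutation of {0, ..., N-1}: each newly placed
    value must differ by more than S from each of the S most recently placed
    values.  Restart from scratch if the greedy scan dead-ends."""
    for _ in range(max_attempts):
        pool = list(range(N))
        random.shuffle(pool)
        perm = []
        stuck = False
        while pool:
            for j, cand in enumerate(pool):
                if all(abs(cand - prev) > S for prev in perm[-S:]):
                    perm.append(pool.pop(j))
                    break
            else:
                stuck = True          # no candidate satisfies the spreading rule
                break
        if not stuck:
            return perm
    raise RuntimeError("no S-random permutation found; try a smaller S")

pi = s_random_permutation(1024, S=15)   # S below sqrt(N/2) ~ 22, per the rule of thumb above
```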
V. DESIGN OF CONSTITUENT ENCODERS

As discussed in Section III, maximizing the weight of output codewords corresponding to weight-2 data sequences gives the best BER performance for moderate bit SNR as the random interleaver size N gets large. In this region the dominant term in the expression for the bit error probability of turbo codes is
where d^p_{j,2} is the minimum parity weight (weight due to parity checks only) of the codewords at the output of the j-th constituent code due to weight-2 data sequences, and β is a constant independent of N. Define d_{j,2} = d^p_{j,2} + 2 as the minimum output weight including parity and systematic bits.
Theorem. For any rate r = b/(b+1) recursive systematic convolutional encoder with generator matrix

$$G = \left[\; I_b \;\; \begin{matrix} h_1(D)/h_0(D) \\ \vdots \\ h_b(D)/h_0(D) \end{matrix} \;\right]$$

where I_b is a b × b identity matrix, deg[h_i(D)] ≤ m_j, h_i(D) ≠ h_0(D) for i = 1, 2, ..., b, and h_0(D) is a primitive polynomial of degree m_j, the following upper bound holds:

$$d^p_{j,2} \le \left\lfloor \frac{2^{m_j-1}}{b} \right\rfloor + 2$$
Proof. In the state diagram of any recursive systematic convolutional encoder with generator matrix G, there exist at least two non-overlapping loops corresponding to all-zero input sequences. If h_0(D) is a primitive polynomial, there are two such loops: one corresponding to zero-input, zero-output sequences with branch length one, and the other corresponding to zero-input but nonzero-output sequences with branch length 2^{m_j} - 1,
"good" rate 1/2 constituent codes, and, in this paper, to obtain "good" rate 1/3 and 2/3 constituent codes. A more precise definition of "good" codes is given in Sections III and V.
III. RANDOM INTERLEAVERS

The challenge in designing good turbo codes is to find the pairing of codewords from each individual encoder, induced by a particular set of interleavers. Intuitively, we would like to avoid pairing low-weight codewords from one encoder with low-weight words from the other encoders. In this section we examine the effects of random interleavers on the low-weight input sequences which may produce low output codeword weights. An input sequence with a single "1" will appear again in the other encoders, for any choice of interleavers. This motivates the use of recursive encoders, since the output weight due to weight-1 input sequences u = (...001000...) is large. Now we briefly examine whether one or more random interleavers can avoid matching small separations between the 1's of a weight-2 data sequence with equally small separations between the 1's of its permuted version(s). Consider, for example, a particular weight-2 data sequence (...001001000...), which corresponds to a low-weight codeword in each of the encoders of Fig. 1. If we randomly select an interleaver of size N, the probability that this sequence will be permuted into another sequence of the same form is roughly 2/N (assuming that N is large and ignoring minor edge effects). The probability that such an unfortunate pairing happens for at least one possible position of the original sequence (...001001000...) within the block of size N is approximately 1 - (1 - 2/N)^N ≈ 1 - e^{-2}. This implies that the minimum distance of a two-code turbo code constructed with a random permutation is not likely to be much higher than the encoded weight of such an unpermuted weight-2 data sequence. By contrast, if we use three codes and two different interleavers, the probability that a particular sequence (...001001000...) will be reproduced by both interleavers is only (2/N)^2. Now the probability of finding such an unfortunate data sequence somewhere within the block of size N is roughly 1 - [1 - (2/N)^2]^N ≈ 4/N. Thus, it is probable that a three-code turbo code using two random interleavers will see an increase in its minimum distance beyond the encoded weight of an unpermuted weight-2 data sequence. This argument can be extended to account for other weight-2 data sequences that may also produce low-weight codewords. For comparison, let us consider a weight-3 data sequence such as (...0011100...). The probability that this sequence is reproduced with one random interleaver is roughly 6/N^2, and the probability that some sequence of the form (...0011100...) is paired with another of the same form is 1 - (1 - 6/N^2)^N ≈ 6/N. Thus, for large block sizes, the bad weight-3 data sequences have a small probability of being matched with bad weight-3 permuted data sequences, even in a two-code system. For a turbo code using three codes and two random interleavers, this probability is even smaller, 1 - [1 - (6/N^2)^2]^N ≈ 36/N^3. This implies that the minimum distance codeword of the turbo code in Fig. 1 is more likely to result from a weight-2 data sequence of the form (...001001000...) than from the weight-3 sequence (...0011100...). Higher-weight sequences have an even smaller probability of reproducing themselves after being passed through the random interleavers.
For a turbo code using q codes and q - 1 interleavers, the probability that a weight-n data sequence will be reproduced somewhere within the block by all q - 1 permutations is of the form 1 - [1 - (β/N^{n-1})^{q-1}]^N, where β is a number that depends on the weight-n data sequence but does not increase with the block size N. For large N, this probability is proportional to (1/N)^{(n-1)(q-1)-1}, which falls off rapidly with N when n and q are greater than two. Furthermore, the symmetry of this expression indicates that increasing either the weight n of the data sequence or the number of codes q has roughly the same effect on lowering this probability.
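As a quick numerical illustration of these estimates (the block size is chosen arbitrarily, and the sketch is ours):

```python
def p_survive(N, n, q, beta):
    """Probability that a bad weight-n pattern is reproduced somewhere in the
    block by all q-1 interleavers: 1 - [1 - (beta/N^(n-1))^(q-1)]^N."""
    return 1.0 - (1.0 - (beta / N ** (n - 1)) ** (q - 1)) ** N

N = 4096
print(p_survive(N, n=2, q=2, beta=2))   # two codes, weight-2: about 1 - e^-2 ~ 0.86
print(p_survive(N, n=2, q=3, beta=2))   # three codes, weight-2: about 4/N  ~ 1e-3
print(p_survive(N, n=3, q=2, beta=6))   # two codes, weight-3: about 6/N    ~ 1.5e-3
print(p_survive(N, n=3, q=3, beta=6))   # three codes, weight-3: about 36/N^3
```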
In summary, from the above arguments, we conclude that weight-2
Obviously, when we enter the zero-input loop from the all-zero state, and when we leave this loop to go back to the all-zero state, we would like the parity output to be equal to 1. From eqs. (1) and (5) we require

$$h_{i0} = 1, \qquad h_{i m_j} = 1 \qquad (6)$$

With this condition we can enter the zero-input loop with a weight-1 symbol at state S^1(D) and then leave this loop from state S^{2^{m_j}-1}(D) back to the all-zero state, for the same weight-1 input. The parity weight of the codeword corresponding to weight-2 data sequences is then 2^{m_j-1} + 2, where the first term is the weight of the zero-input loop and the second term is due to the parity bit appearing when entering and leaving the loop.
If b = 1, the proof is complete and the condition to achieve the upper bound is given by (6). For b = 2 we may enter the zero-input loop with u = 10 at state S^1(D) and leave the loop to the zero state with u = 01 at some state S^i(D). If we can choose S^i(D) such that the output weight of the zero-input loop from S^1(D) to S^i(D) is exactly 2^{m_j-1}/2, then the output weight of the zero-input loop from S^{i+1}(D) to S^{2^{m_j}-1}(D) is exactly 2^{m_j-1}/2, and the minimum weight of codewords corresponding to some weight-2 data sequences is

$$\frac{2^{m_j-1}}{2} + 2$$
In general, for any b, if we extend the procedure for b = 2, the minimum weight of the codewords corresponding to weight-2 data sequences is

$$\left\lfloor \frac{2^{m_j-1}}{b} \right\rfloor + 2 \qquad (7)$$

where ⌊x⌋ is the largest integer less than or equal to x. Clearly this is the best achievable weight for the minimum-weight codeword corresponding to weight-2 data sequences. This upper bound can be achieved if the maximum run length of 1's (m_j) in the zero-input loop does not exceed ⌊2^{m_j-1}/b⌋, where b is a power of 2.
The run property of ML FSRs can help us in designing codes achieving this upper bound. Considering only runs of 1's with length l, for 0 < l < m_j - 1, there are 2^{m_j-2-l} runs of length l, no runs of length m_j - 1, and only one run of length m_j. For a more detailed proof, and for the conditions when b is not a power of 2, see [14]. □
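The bound can be checked numerically for small codes. The sketch below (ours, with our own polynomial-to-octal convention) enumerates the weight-2 error events 1 + D^t of a rate 1/2 recursive systematic code with b = 1, a primitive feedback polynomial of degree 4, and a feedforward polynomial satisfying condition (6), and compares the smallest parity weight found with the theorem's bound ⌊2^{m_j-1}/b⌋ + 2 = 10.

```python
def rsc_run(u, g0, g1):
    """Run a recursive systematic encoder (b = 1); return (parity weight, final state)."""
    m = len(g0) - 1
    state = [0] * m
    weight = 0
    for bit in u:
        fb = (bit + sum(g0[i + 1] * state[i] for i in range(m))) % 2
        weight += (g1[0] * fb + sum(g1[i + 1] * state[i] for i in range(m))) % 2
        state = [fb] + state[:-1]
    return weight, state

def min_parity_weight_weight2(g0, g1, max_sep=64):
    """Smallest parity weight over weight-2 inputs 1 + D^t that return the
    encoder to the all-zero state (the finite-weight weight-2 error events)."""
    best = None
    for t in range(1, max_sep + 1):
        w, state = rsc_run([1] + [0] * (t - 1) + [1], g0, g1)
        if not any(state):
            best = w if best is None else min(best, w)
    return best

m = 4                                  # 16 states, b = 1
g0 = [1, 0, 0, 1, 1]                   # h0(D) = 1 + D^3 + D^4, primitive
g1 = [1, 1, 0, 1, 1]                   # h1(D) = 1 + D + D^3 + D^4, h10 = h1m = 1
print(min_parity_weight_weight2(g0, g1), 2 ** (m - 1) + 2)
# the second value is the theorem's bound for b = 1; the best codes of
# Section V are reported to meet it
```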
Corollary. For any rate r = b/n recursive systematic convolutional code with b inputs, b systematic outputs, and n - b parity output bits using a primitive feedback generator, we have

$$d^p_{j,2} \le (n-b)\left[ \left\lfloor \frac{2^{m_j-1}}{b} \right\rfloor + 2 \right] \qquad (8)$$

Proof. A trivial solution is to repeat the parity output of a rate b/(b+1) code. Then if this code achieves the upper bound, so does a rate b/n code. □

There is an advantage in using b > 1, since the bound in eq. (8) for rate b/bn codes is larger than the bound for rate 1/n codes.
Best Rate 2/3 Constituent Codes. We obtained the best rate 2/3 codes as shown in Table 1, where d^p_{j,2} is simply denoted by d^p_2 and d_2 = d^p_2 + 2. The minimum weight of codewords corresponding to weight-3 data sequences is denoted by d_3, d_min is the minimum distance of the code, and k = m_j + 1 in all tables. By "best" we only mean codes with large d_2 for a given m_j.

Best Rate 4/5, 16-State Constituent Codes. All three codes found have the four common generators h0 = 23, h1 = 35, h2 = 31, h3 = 37, plus an additional generator h4 = 27, h4 = 21, or h4 = 33, all yielding d_2 = 5 and d_min = 4.
Trellis Termination for b/n Codes with Canonical Realization. Trellis termination is performed (for b = 2) by setting the switch shown in Fig. 2 to position B. The tap coefficients a_{i0}, ..., a_{i,m_j-1} for i = 1, 2, ..., b can be obtained by repeated use of eq. (2), and by solving
which is the period of a maximal-length (ML) linear feedback shift register (FSR) of degree m_j. The parity codeword weight of this loop is 2^{m_j-1}, due to the balance property of ML sequences. This weight depends only on the degree of the primitive polynomial and is independent of h_i(D), due to the invariance of ML FSR sequences to initial conditions. In general, the output of the encoder is a linear function of its input and current state. So, for any output we may consider, provided it depends on at least one component of the state and it is not h_0(D), the weight of a zero-input loop is 2^{m_j-1}, by the shift-and-add property of ML FSRs.
Figure 2: Canonical representation of a rate b/(b+1) encoder (b = 2, m_j = 3).
Consider the canonical representation of a rate b/(b+1) encoder as shown in Fig. 2, with the switch in position A. Let S^k(D) be the state of the encoder at time k, with coefficients S^k_0, S^k_1, ..., S^k_{m_j-1}. The output of the encoder at time k is

$$x_k = S^k_{m_j-1} + \sum_{i=1}^{b} u^k_i h_{i m_j} \qquad (1)$$

The state transition for input u^k_1, u^k_2, ..., u^k_b at time k is given by

$$S^{k+1}(D) = \left[ \sum_{i=1}^{b} u^k_i h_i(D) + D\,S^k(D) \right] \bmod h_0(D) \qquad (2)$$
From the all-zero state we can enter the zero-input loop with nonzero input symbols u_1, ..., u_b at state

$$S^1(D) = \sum_{i=1}^{b} u_i h_i(D) \bmod h_0(D) \qquad (3)$$

For the same nonzero input symbol we leave the loop, exactly at state S^{2^{m_j}-1}(D), back to the all-zero state, where S^{2^{m_j}-1}(D) satisfies

$$S^1(D) = D\,S^{2^{m_j}-1}(D) \bmod h_0(D) \qquad (4)$$

i.e., S^{2^{m_j}-1}(D) is the "predecessor" of state S^1(D) in the zero-input loop. If the most significant bit of the predecessor state is zero, i.e., S^{2^{m_j}-1}_{m_j-1} = 0, then the branch output for the transition from S^{2^{m_j}-1}(D) to S^1(D) is zero for a zero input symbol. Now consider any weight-1 input symbol, i.e., u_j = 1 for j = i and u_j = 0 for j ≠ i, j = 1, 2, ..., b. The question is: what are the conditions on the coefficients of h_i(D) such that, if we enter the zero-input loop with a weight-1 input symbol at state S^1(D), the most significant bit of the "predecessor" state S^{2^{m_j}-1}(D) is zero? Using eqs. (3) and (4) we can establish that

$$h_{i0} + h_{i m_j} = 0 \qquad (5)$$
Code Generators (octal)

h0 = 7,  h1 = 3,  h2 = 5
h0 = 13, h1 = 15, h2 = 17
h0 = 23, h1 = 35, h2 = 27
h0 = 23, h1 = 35, h2 = 33
h0 = 45, h1 = 43, h2 = 61
Table 2: Best rate 1/2 punctured constituent codes.
the resulting equations. The trellis can be terminated in the zero state in at least m_j/b and at most m_j clock cycles (see [14] for details). When Fig. 3 is extended to multiple input bits (b parallel feedback shift registers), a switch should be used for each input bit.

Best Punctured Rate 1/2 Constituent Codes. A rate 2/3 constituent code can be derived by puncturing the parity bit of a rate 1/2 recursive systematic convolutional code. If the parity puncturing pattern is P = [10] or P = [01], we show in [14] that it is impossible to achieve the upper bound on d_2 = d^p_2 + 2 for rate 2/3 codes. (A puncturing pattern P has zeros where symbols are removed.) The best rate 1/2 constituent codes with puncturing pattern P = [10] are given in Table 2.
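The puncturing operation itself is mechanical; for reference, a small sketch (ours, not the authors' exact mapping) of deleting parity bits with pattern P = [10]:

```python
def puncture_parity(systematic, parity, pattern=(1, 0)):
    """Keep parity[k] only where pattern[k % len(pattern)] == 1, turning a
    rate 1/2 systematic/parity pair into a rate 2/3 stream."""
    kept = [p for k, p in enumerate(parity) if pattern[k % len(pattern)]]
    return systematic, kept
```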
Best Rate 1/3 Constituent Codes. For rate 1/n codes the upper bound in eq. (7) for b = 1 reduces to

$$d^p_{j,2} \le (n-1)\left(2^{m_j-1} + 2\right)$$

This upper bound was originally derived in [5], where the best rate 1/2 constituent codes meeting the bound were obtained. Here we present a simple proof based on our previous general result on rate b/n codes. We then obtain the best rate 1/3 codes without parity repetition. In [14] we illustrate why parity repetition is undesirable for codes to be decoded with turbo decoders.
Code Generators (octal)

g0 = 3,  g1 = 2,  g2 = 1
g0 = 7,  g1 = 5,  g2 = 3
g0 = 13, g1 = 17, g2 = 15
g0 = 23, g1 = 33, g2 = 37
g0 = 23, g1 = 35, g2 = 27

Table 3: Best rate 1/3 constituent codes (without parity repetition).
that g_i(D) ≠ g_0(D), by the shift-and-add and balance properties of ML FSRs. If S(D) represents the state polynomial, then we can enter the zero-input loop only at state S^1(D) = 1 and leave the loop to the all-zero state at state S^{2^{m_j}-1}(D) = D^{m_j-1}. The i-th parity output on the zero-input transition S^{2^{m_j}-1}(D) → S^1(D) is

$$x_i = g_{i0} + g_{i m_j}$$

If g_{i0} = 1 and g_{i m_j} = 1 for i = 1, ..., n - 1, the output weight of the encoder for that transition is zero. The output weight when entering and leaving the zero-input loop is (n - 1) in each case. In addition, the output weight of the zero-input loop will be (n - 1) 2^{m_j-1}. Thus we can achieve the upper bound.
We obtained the best rate 1/3 codes without parity repetition as shown in Table 3, where d_2 = d^p_2 + 2 represents the minimum output weight given by weight-2 data sequences. The best rate 1/2 constituent codes are given by g0 and g1 in this table, as was also reported in [5].
VI. TURBO DECODING FOR MULTIPLE CODES

In this section, we consider decoding algorithms for multiple-code turbo codes. In general, the advantage of using three or more constituent codes is that the corresponding two or more interleavers have a better chance of breaking sequences that were not broken by another interleaver. The disadvantage is that, for an overall desired code rate, each code must be punctured more, resulting in weaker constituent codes. Also, shorter constraint length codes should be used for successful operation of the turbo decoder. In our experiments, we have used randomly selected interleavers and S-random interleavers.

Let u_k be a binary random variable taking values in {0, 1}, representing the sequence of information bits u = (u_1, ..., u_N). The MAP algorithm [1] provides the log-likelihood ratio L_k, given the received symbols y:

$$L_k = \log\frac{P(u_k = 1 \mid \mathbf{y})}{P(u_k = 0 \mid \mathbf{y})}
     = \log\frac{\sum_{\mathbf{u}: u_k = 1} P(\mathbf{y}\mid\mathbf{u}) \prod_{j} P(u_j)}
                {\sum_{\mathbf{u}: u_k = 0} P(\mathbf{y}\mid\mathbf{u}) \prod_{j} P(u_j)} \qquad (9)$$
Figure 3: Rate 1/n code.

Figure 4: Channel Model.

Consider the rate 1/n code shown in Fig. 3. In this figure g_0(D) is assumed to be a primitive polynomial. As discussed above, the output weight of the zero-input loop per parity bit is 2^{m_j-1}, independent of the choice of g_i(D), i = 1, 2, ..., n - 1, provided that g_i(D) ≠ 0 and

For efficient computation of Eq. (9) when the a priori probabilities P(u_j) are nonuniform, the modified MAP algorithm in [15] is simpler to use than the version considered in [7]. Therefore, in this paper, we use the modified MAP algorithm of [15].
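For intuition only, the quantity in Eq. (9) can be evaluated by brute force on a very small block; the MAP algorithm of [1], [15] computes the same bitwise log-likelihood ratio without enumerating all 2^N sequences. In the sketch below (ours) the likelihood and prior callables are placeholders, not an implementation of any particular channel.

```python
import math
from itertools import product

def brute_force_llr(N, log_lik, log_prior, k):
    """Bitwise LLR of Eq. (9) by explicit enumeration (feasible only for tiny N).
    log_lik(u)        -> log P(y|u) for the whole block (y captured in the closure)
    log_prior(j, bit) -> log P(u_j = bit)"""
    num, den = [], []
    for u in product((0, 1), repeat=N):
        val = log_lik(u) + sum(log_prior(j, u[j]) for j in range(N))
        (num if u[k] == 1 else den).append(val)
    def logsumexp(xs):
        m = max(xs)
        return m + math.log(sum(math.exp(x - m) for x in xs))
    return logsumexp(num) - logsumexp(den)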
`
`9.3-4
`
`282
`
`

`

If the rate b/n constituent code is not equivalent to a punctured rate 1/n' code, or if turbo trellis coded modulation is used, we can first use the symbol MAP algorithm [1] to compute the log-likelihood ratio of a symbol u = u_1, u_2, ..., u_b given the observation y as

$$L(\mathbf{u}) = \log\frac{P(\mathbf{u} \mid \mathbf{y})}{P(\mathbf{0} \mid \mathbf{y})}$$

where 0 corresponds to the all-zero symbol. Then we obtain the log-likelihood ratios of the j-th bit within the symbol by

$$L_j = \log\frac{\sum_{\mathbf{u}: u_j = 1} e^{L(\mathbf{u})}}{\sum_{\mathbf{u}: u_j = 0} e^{L(\mathbf{u})}}$$

In this way the turbo decoder operates on bits, and bit, rather than symbol, interleaving is used.
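A small sketch (ours) of this symbol-to-bit conversion, assuming the per-symbol log-likelihoods are available as a dictionary keyed by the b-bit symbol:

```python
import math

def bit_llrs_from_symbol(symbol_loglik, b):
    """Convert per-symbol log-likelihoods L(u), u in {0,1}^b (relative to the
    all-zero symbol), into b per-bit LLRs as in the expression above."""
    def logsumexp(xs):
        m = max(xs)
        return m + math.log(sum(math.exp(x - m) for x in xs))
    llrs = []
    for j in range(b):
        num = [v for u, v in symbol_loglik.items() if u[j] == 1]
        den = [v for u, v in symbol_loglik.items() if u[j] == 0]
        llrs.append(logsumexp(num) - logsumexp(den))
    return llrs
```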
We can use Eqs. (11) and (12) again, but this time for i = 0, 1, 3, to express Eq. (10) as

$$L_k = f(\mathbf{y}_2, \tilde L_0, \tilde L_1, \tilde L_3, k) + \tilde L_{0k} + \tilde L_{1k} + \tilde L_{3k} \qquad (16)$$

and similarly,
The channel model is shown in Fig. 4, where the n_{0k}'s and the n_{1k}'s are independent, identically distributed (i.i.d.) zero-mean Gaussian random variables with unit variance, and ρ = sqrt(2rEb/N0) is the SNR. The same model is used for each encoder. To explain the basic decoding concept, we restrict ourselves to three codes, but the extension to several codes is straightforward. In order to simplify the notation, consider the combination of permuter and encoder as a block code with input u and outputs x_i, i = 0, 1, 2, 3 (x_0 = u), and the corresponding received sequences y_i, i = 0, 1, 2, 3. The optimum bit decision metric on each bit is (for data with uniform a priori probabilities)

$$L_k = \log\frac{\sum_{\mathbf{u}: u_k=1} P(\mathbf{y}_0\mid\mathbf{u})\,P(\mathbf{y}_1\mid\mathbf{u})\,P(\mathbf{y}_2\mid\mathbf{u})\,P(\mathbf{y}_3\mid\mathbf{u})}
              {\sum_{\mathbf{u}: u_k=0} P(\mathbf{y}_0\mid\mathbf{u})\,P(\mathbf{y}_1\mid\mathbf{u})\,P(\mathbf{y}_2\mid\mathbf{u})\,P(\mathbf{y}_3\mid\mathbf{u})} \qquad (10)$$

but in practice, we cannot compute Eq. (10) for large N because the permutations π_2, π_3 imply that y_2 and y_3 are no longer simple convolutional encodings of u. Suppose that we evaluate P(y_i | u), i = 0, 2, 3, in Eq. (10) using Bayes' rule and the following approximation:

$$P(\mathbf{u} \mid \mathbf{y}_i) \approx \prod_{k=1}^{N} \tilde P_i(u_k) \qquad (11)$$

Note that P(u | y_i) is not separable in general. However, for i = 0, P(u | y_0) is separable; hence, Eq. (11) holds with equality. If such an approximation, i.e., Eq. (11), can be obtained, we can use it in Eq. (10) for i = 2 and i = 3 (by Bayes' rule) to complete the algorithm. A reasonable criterion for this approximation is to choose the product ∏_{k=1}^{N} P̃_i(u_k) such that it minimizes the Kullback distance, or free energy [3, 16]. Define L̃_ik by

$$\tilde P_i(u_k) = \frac{e^{u_k \tilde L_{ik}}}{1 + e^{\tilde L_{ik}}} \qquad (12)$$
where u_k ∈ {0, 1}. Then the Kullback distance is given by

$$F(\tilde L_i) = \sum_{\mathbf{u}} \frac{\prod_{k=1}^{N} e^{u_k \tilde L_{ik}}}{\prod_{k=1}^{N} (1 + e^{\tilde L_{ik}})}
  \,\log\frac{\prod_{k=1}^{N} e^{u_k \tilde L_{ik}}}{\prod_{k=1}^{N} (1 + e^{\tilde L_{ik}})\, P(\mathbf{u}\mid\mathbf{y}_i)} \qquad (13)$$

Minimizing F(L̃_i) involves forward and backward recursions analogous to the MAP decoding algorithm, but we have not attempted this approach in this work. Instead of using Eq. (13) to obtain {P̃_i} or, equivalently, {L̃_ik}, we use Eqs. (11) and (12) for i = 0, 2, 3 (by Bayes' rule) to express Eq. (10) as
$$L_k = f(\mathbf{y}_3, \tilde L_0, \tilde L_1, \tilde L_2, k) + \tilde L_{0k} + \tilde L_{1k} + \tilde L_{2k} \qquad (17)$$

A solution to Eqs. (14), (16), and (17) is

$$\tilde L_{1k} = f(\mathbf{y}_1, \tilde L_0, \tilde L_2, \tilde L_3, k), \quad
  \tilde L_{2k} = f(\mathbf{y}_2, \tilde L_0, \tilde L_1, \tilde L_3, k), \quad
  \tilde L_{3k} = f(\mathbf{y}_3, \tilde L_0, \tilde L_1, \tilde L_2, k) \qquad (18)$$

for k = 1, 2, ..., N, provided that a solution to Eq. (18) does indeed exist. The final decision is then based on

$$L_k = \tilde L_{0k} + \tilde L_{1k} + \tilde L_{2k} + \tilde L_{3k} \qquad (19)$$
which is passed through a hard limiter with zero threshold. We attempted to solve the nonlinear equations in Eq. (18) for L̃_1, L̃_2, and L̃_3 by using the iterative procedure

$$\tilde L^{(m)}_{1k} = \alpha^{(m)}_1 f(\mathbf{y}_1, \tilde L_0, \tilde L^{(m-1)}_2, \tilde L^{(m-1)}_3, k) \qquad (20)$$

for k = 1, 2, ..., N, iterating on m. Similar recursions hold for L̃^{(m)}_{2k} and L̃^{(m)}_{3k}. The gain α^{(m)}_1 should be equal to one, but we noticed experimentally that better convergence can be obtained by optimizing this gain for each iteration, starting from a value slightly less than one and increasing toward one with the iterations, as is often done in simulated annealing methods. We start the recursion with the initial condition² L̃^{(0)}_1 = L̃^{(0)}_2 = L̃^{(0)}_3 = L̃_0. For the computation of f(·), we use the modified MAP algorithm as described in [9], with permuters (direct and inverse) where needed, as shown in Fig. 5. The MAP algorithm always starts and ends at the all-zero state, since we always terminate the trellis as described in [9]. We assumed π_1 to be the identity; however, any π_1 can be used. The overall decoder is composed of block decoders connected as in Fig. 5, which can be implemented as a pipeline or by feedback. In [11] we proposed an alternative version of the above decoder which is more appropriate for use in turbo trellis coded modulation, i.e., set L̃_0 = 0 and consider y_0 as part of y_1. If the systematic bits are distributed among encoders, we use the same distribution for y_0 among the MAP decoders.
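The update schedule of Eq. (20) and the final decision of Eq. (19) can be sketched as follows. Here f is only a placeholder for the per-bit extrinsic value returned by the modified MAP decoder of each constituent code (the MAP recursions themselves are not shown), the a priori contributions of the other codes are passed to it as a single per-bit sum, and the serial update order is one possible realization of the structure in Fig. 5.

```python
def multiple_turbo_decode(y, f, L0, iterations=20, gains=None):
    """Sketch of the iterative rule of Eq. (20) for three constituent codes.
    y      : sequence holding the received blocks y1, y2, y3 (index 0 unused)
    f      : f(i, yi, La, k) -> extrinsic LLR of bit k from decoder i, given
             the per-bit a priori LLRs La (placeholder, not implemented here)
    L0     : channel LLRs of the systematic bits, L0[k] = 2*rho*y0[k]
    gains  : optional per-iteration gains alpha^(m), slightly below 1 early on"""
    N = len(L0)
    L1, L2, L3 = list(L0), list(L0), list(L0)      # initial condition used in the text
    for m in range(iterations):
        g = 1.0 if gains is None else gains[m]
        L1 = [g * f(1, y[1], [L0[j] + L2[j] + L3[j] for j in range(N)], k) for k in range(N)]
        L2 = [g * f(2, y[2], [L0[j] + L1[j] + L3[j] for j in range(N)], k) for k in range(N)]
        L3 = [g * f(3, y[3], [L0[j] + L1[j] + L2[j] for j in range(N)], k) for k in range(N)]
    # final decision on L0 + L1 + L2 + L3 (Eq. (19)), hard-limited at zero
    return [1 if L0[k] + L1[k] + L2[k] + L3[k] > 0 else 0 for k in range(N)]
```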
At this point, further approximation for turbo decoding is possible if one term corresponding to a sequence u dominates the other terms in the summations in the numerator and denominator of Eq. (15). Then the summations in Eq. (15) can be replaced by "maximum" operations with the same indices, i.e., replacing Σ over u with u_k = i by max over u with u_k = i, for i = 0, 1. A similar approximation can be used for L̃_2k and L̃_3k in Eq. (18). This suboptimum decoder then corresponds to a turbo decoder that uses soft-output Viterbi algorithm (SOVA)-type decoders rather than MAP decoders. Further approximations, i.e., replacing Σ with max, can also be used in the MAP algorithm.
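The max-log simplification mentioned here amounts to replacing each log of a sum of exponentials by the maximum of its terms, as in this small comparison (values chosen for illustration):

```python
import math

def logsum(xs):
    """Exact log of a sum of exponentials, as used in the MAP/LLR expressions."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def maxlog(xs):
    """Max-log approximation: keep only the dominant term."""
    return max(xs)

vals = [-1.3, 0.2, -4.0]
print(logsum(vals), maxlog(vals))   # about 0.41 vs 0.2; the gap shrinks when one term dominates
```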
VII. MULTIPLE-CODE ALGORITHM APPLIED TO TWO CODES

For turbo codes with only two constituent codes, Eq. (20) reduces to

$$L_k = f(\mathbf{y}_1, \tilde L_0, \tilde L_2, \tilde L_3, k) + \tilde L_{0k} + \tilde L_{2k} + \tilde L_{3k} \qquad (14)$$

$$\tilde L^{(m)}_{1k} = \alpha^{(m)}_1 f(\mathbf{y}_1, \tilde L_0, \tilde L^{(m-1)}_2, k)$$

where L̃_0k = 2ρ y_0k (for binary modulation) and

$$f(\mathbf{y}_1, \tilde L_0, \tilde L_2, \tilde L_3, k) = \log
  \frac{\sum_{\mathbf{u}: u_k=1} P(\mathbf{y}_1\mid\mathbf{u}) \prod_{j\ne k} e^{u_j(\tilde L_{0j}+\tilde L_{2j}+\tilde L_{3j})}}
       {\sum_{\mathbf{u}: u_k=0} P(\mathbf{y}_1\mid\mathbf{u}) \prod_{j\ne k} e^{u_j(\tilde L_{0j}+\tilde L_{2j}+\tilde L_{3j})}} \qquad (15)$$

$$\tilde L^{(m)}_{2k} = \alpha^{(m)}_2 f(\mathbf{y}_2, \tilde L_0, \tilde L^{(m)}_1, k)$$

for k = 1, 2, ..., N and m = 1, 2, ..., where for each iteration α^{(m)}_1 and α^{(m)}_2 can be optimized (simulated annealing) or set to 1 for simplicity. The

² Note that the components of the L̃_i's corresponding to the tail bits, i.e., L̃_ik for k = N + 1, ..., N + M, are set to zero for all iterations.
(3)octal and g1 = (1)octal, and K = 5 with (g1/g0), where g0 = (23)octal and g1 = (33)octal, the required bit SNR was 0.85 dB. This is an example where the systematic bits are not transmitted. For rate 1/3, we used two K = 5 codes, (1, g1/g0) and (g1/g0), with g0 = (23)octal and g1 = (33)octal, and obtained bit SNR = 0.25 dB. For rate 1/4, we used two K = 5 codes with (1, g1/g0, g2/g0) and (g1/g0), with g0 = (23)octal, g1 = (33)octal, and g2 = (25)octal, and obtained bit SNR = 0 dB. A fixed number of iterations, m = 20, was used for all cases. Many of these codes may actually require a smaller number of iterations for BER = 10^-5 or below. Simulation of the performance of other codes reported in this paper is still in progress.
[Figure 6 plot: BER versus Eb/N0 (dB) for interleaver size N = 4096, comparing the K = 15, rate 1/4 Galileo code with three K = 3 codes (m = 20 iterations) and two K = 5 codes at different rates (m = 10 iterations).]
Figure 5: Multiple Turbo Decoder Structure
decoding configuration for two codes, according to the previous section, can be obtained from Fig. 5. In this special case, since the paths in Fig. 5 are disjoint, the decoder structure can be reduced to a serial-mode structure if desired.

If we optimize α_1^(m) and α_2^(m), our method for two codes is similar to the decoding method proposed in [7], which requires estimates of the variances of L̃_1k and L̃_2k for each iteration in the presence of errors. In the method proposed in [15], the received "systematic" observation was subtracted from L̃_1k, which may result in performance degradation. In [18] the method proposed in [15] was used, but the received "systematic" observation was interleaved and provided to decoder 2. In [9], we argued that there is no need to interleave the received "systematic" observation and provide it to decoder 2, since L̃_0k does this job. It seems that our proposed method with α_1^(m) and α_2^(m) equal to 1 is simple and achieves the same performance reported in [18] for rate 1/2 codes.
VIII. PERFORMANCE AND SIMULATION RESULTS

The bit error rate performance of these codes was evaluated by using transfer function bounds [6], [13]. In [13] it was shown that transfer function bounds are very useful for signal-to-noise ratios above the cutoff rate threshold, but that they cannot accurately predict performance in the region between cutoff rate and capacity. In this region, the performance was computed by simulation.
Figure 6 shows the performance of turbo codes with m iterations and the following generators: for two K = 5 constituent codes, (1, g1/g0, g2/g0) and (g1/g0), with g0 = (23)octal, g1 = (33)octal, and g2 = (25)octal; for three K = 3 codes, (1, g1/g0) and (g1/g0), with g0 = (7)octal and g1 = (5)octal; and for three K = 4 codes, (1, g1/g0) and (g1/g0), with g0 = (13)octal and g1 = (17)octal.
Further results at BER = 10^-5 were obtained for two constituent codes with interleaving size N = 16384, as follows. For a rate 1/2 turbo code using two codes, K = 2 (differential encoder) with (g1/g0), where g0 =
Figure 6: Performance of turbo codes
IX. TURBO TRELLIS CODED MODULATION

A pragmatic approach for turbo codes with multilevel modulation was proposed in [8]. Here we propose a different approach that outperforms the results in [8] when M-QAM modulation is used. A straightforward method for using turbo codes for multilevel modulation is first to select a rate b/(b+1) constituent code whose outputs are mapped to a 2^{b+1}-level modulation based on Ungerboeck's set partitioning method (i.e., we can use Ungerboeck's codes with feedback). If MPSK modulation is used, for every b bits at the input of the turbo encoder we transmit two consecutive 2^{b+1}-PSK signals, one per encoder output. This results in a throughput of b/2 bits/sec/Hz. If M-QAM modulation is used, we map the b + 1 outputs of the first component code to the 2^{b+1} in-phase levels (I-channel) of a 2^{2b+2}-QAM signal set, and the b + 1 outputs of the second componen
