`
AN APPLICATION FOR

UNITED STATES PATENT

IN THE NAME OF

Dariush Divsalar

Fabrizio Pollara

of

Jet Propulsion Laboratory
California Institute of Technology

FOR

HYBRID CONCATENATED CODES

AND ITERATIVE DECODING

Scott Harris & John Land
FISH & RICHARDSON
4225 Executive Square, Suite 1400
La Jolla, CA 92037
(619) 678-5070 voice
(619) 678-5099 fax
`
`
`HYBRID CONCATENATED CODES AND ITERATIVE DECODING
`
`ORIGIN OF INVENTION
`
`The invention described herein was made in the performance of work under a NASA
`
`contract, and is subject to the provisions of Public Law 96-517 (35 USC 202) in which
`the Contractor has elected to retain title.
`
`BACKGROUND OF THE INVENTION
`
1. Field of the Invention

This invention relates to error correcting codes.

2. Description of Related Art
`
Turbo codes are binary error-correcting codes built from the parallel concatenation of two recursive systematic convolutional codes and using a feedback decoder. Recently introduced by Berrou, et al. ("Near Shannon limit error-correcting coding and decoding: Turbo-codes", ICC '93, Conf. Rec. pp. 1064-1070, Geneva, May 1993), the basics of such codes are described further in U.S. Patent Nos. 5,446,747 and 5,406,570.
`
The reference and patents to Berrou describe a basic turbo code encoder architecture of the type shown in the block diagram in FIGURE 1. As described in Berrou '747, FIGURE 1 shows a block diagram of a coder in an example where two distinct codes are used in parallel. Each source data element d to be coded is coupled to a first systematic coding module 11 and, through a temporal interleaving module 12, to a second systematic coding module 13. The coding modules 11 and 13 may be of any known systematic type, such as convolutional coders, that take into account at least one of the preceding source data elements in order to code the source data element d. The codes implemented in coding modules 11 and 13 may be identical or different.
`
The input information bits d feed the first coding module 11 and, after being scrambled by the interleaving module 12, enter the second coding module 13. A codeword of a parallel concatenated code consists of the information input bits to the first encoder followed by the parity check bits of both encoders.
`
Under this architecture, there are at least two coded data elements Y1 and Y2, coming from distinct coders 11 and 13, associated with each source data element d. A data element X, equal to the source data element d, is also transmitted. This characteristic was described in Berrou '747 as "necessary for the making of the decoding modules".

The transmitted coded data elements and source data element become received data elements at a decoder. The task of the decoder is to reconstruct the original data source d bit stream from the received data elements, which may have been corrupted by noise.
`
Thus, an important aspect of prior art turbo code encoders is that they transmit a data element X equal to the input source data element d.

The present invention results from the observation that the prior art fails to achieve a simpler architecture for the encoder, and fails to provide encoding as robust as is required or desired in certain environments, including low-power, constrained-bandwidth uses, such as deep space communications and personal communication devices, and high-noise environments.
`
`
`
`
`
`SUMMARY OF THE INVENTION
`
`The present invention encompasses several improved turbo code apparatuses and
`
`methods. In a first class of turbo code encoders, a data source d is applied to two or more
`
`encoders with an interleaver between the data source and each of the second and
`
`subsequent encoders. Each of the encoders outputs a turbo code element which may be
`
`transmitted or stored. A parallel decoder provides the ability to decode the turbo code
`
`elements to derive the original source information d without use of a received data signal
`
`corresponding to d. The output of the turbo code encoder optionally may be coupled to
`
a multilevel trellis-coded modulator that provides excellent performance.
`
`In a second class of turbo code encoders, a data source d is applied to two or more
`
`encoders with an interleaver between the data source and each of the second and
`
`subsequent encoders. Each of the encoders outputs a turbo code element. In addition, the
`
`original data source d is output from the encoder. All of the output elements are coupled
to a multilevel trellis-coded modulator.
`
`In a third class of turbo code encoders, at least two data sources are applied to two or
`
`more encoders with an interleaver between each data source and each of the second and
`
`subsequent encoders. Each of the encoders outputs a plurality of turbo code elements
`
`which may be transmitted or stored. The output of the turbo code encoder optionally may
`
`be coupled to a multilevel trellis-coded modulator.
`
`In a fourth class of turbo code encoders, at least two data sources are applied to two or
`
`more encoders with at least two interleavers between each data source and each of the
`
`second and subsequent encoders. Each of the encoders outputs a plurality of turbo code
`
`elements which may be transmitted or stored. The output of the turbo code encoder
`
`optionally may be coupled to a multilevel trellis-coded modulator.
`
`
`In a fifth class of turbo code encoders, at least one data source is applied to one or more
`
`serially linked encoders through at least one interleaver.
`
`The invention also encompasses a novel method of terminating or resetting a turbo coder,
`
`and a general parallel decoder structure.
`
`The details of the preferred embodiments of the present invention are set forth in the
`
`accompanying drawings and the description below. Once the details of the invention are
`
known, numerous additional innovations and changes will become obvious to one skilled in the art.
`
`
`
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`FIGURE 1 is a block diagram of a prior art turbo code encoder.
`
`FIGURE 2 is a block diagram of a general model of a turbo code encoder having three
`
`codes.
`
FIGURE 3 is a matrix of a weight-4 sequence.
`
FIGURE 4 is a block diagram of an input trellis termination method for turbo code encoders in accordance with the present invention.
`
`FIGURE 5 is a block diagram of a turbo encoder showing output only of encoded parity
`elements.
`
`
`FIGURE 6A is a block diagram of a third embodiment of the present invention, showing
`
`output of multiple encoded elements derived from multiple input data sources, the use of
`
`single interleavers per data source, and an optional multilevel trellis-coded modulator.
`
`FIGURE 6B is a block diagram of a variation of the coder shown in FIGURE 6A,
`
`showing a self-concatenating code.
`
FIGURE 6C is a block diagram of a fourth embodiment of the present invention, showing
`
`output of multiple encoded elements derived from multiple input data sources, the use of
`
`multiple interleavers per data source, and an optional multilevel trellis-coded modulator.
`
`FIGURE 7A is a block diagram of a serial encoder in accordance with the present
`invention.
`
`
`
`
`
`
`FIGURE 7B is a block diagram of a parallel-serial encoder in accordance with the present
`invention.
`
`FIGURE 8 is a diagram showing the performance of various turbo codes.
`
`FIGURE 9 is a block diagram of a first rate 1/2 turbo coder in accordance with the present
`invention.
`
`FIGURE 10 is a block diagram of a second rate 1/2 turbo coder in accordance with the
`
`present invention.
`
FIGURE 11 is a block diagram of a rate 1/3 turbo coder in accordance with the present
`invention.
`
`
`FIGURE 12 is a block diagram of a rate 1/4 turbo coder in accordance with the present
`invention.
`
`FIGURE 13 is a diagram showing the performance of various rate 1/4 turbo codes.
`
`FIGURE 14 is a diagram showing the performance of various turbo codes with short
`block sizes.
`
`FIGURE 15 is a diagram showing the performance of various three code turbo codes.
`
`FIGURE 16 is a block diagram of a prior art decoder structure.
`
`FIGURE 17 is a block diagram of a parallel decoder structure in accordance with the
`
`present invention.
`
`
`
`
`
`
`FIGURE 18 is a block diagram of a channel model.
`
`FIGURE 19 is a signal flow graph for extrinsic information in a decoder.
`
`FIGURE 20A is a block diagram of a single parallel block decoder in accordance with the
`
`present invention.
`
`FIGURE 20B is a block diagram showing a multiple turbo code decoder for a three code
`
`system, using three blocks similar to the decoder in FIGURE 20A.
`
`FIGURE 20C is a block diagram showing a multiple turbo code decoder for a three code
`
`system, using three blocks similar to the decoder in FIGURE 20A, and having a
`switchable serial decoder mode.
`
`
FIGURE 20D is a block diagram showing a decoder corresponding to the self-concatenating coder of FIGURE 6B.
`
`FIGURE 20E is a block diagram showing a decoder corresponding to the serial coder of
`FIGURE 7A.
`
`
`FIGURE 20F is a block diagram showing a decoder corresponding to the serial coder of
`FIGURE 7B.
`
`FIGURE 21 is a block diagram of a 16 QAM turbo trellis-coded modulation coder in
`
`accordance with the present invention.
`
`FIGURE 22 is a diagram showing the BER performance of the coder shown in FIGURE
`21.
`
`
`
`
`
`
FIGURE 23 is a block diagram of an 8 PSK turbo trellis-coded modulation coder in
`
`accordance with the present invention.
`
FIGURE 24 is a diagram showing the BER performance of the coder shown in FIGURE
`23.
`
`FIGURE 25 is a block diagram of a 64 QAM turbo trellis-coded modulation coder in
`
`accordance with the present invention.
`
`FIGURE 26 is a diagram showing the BER performance of the coder shown in FIGURE
`25.
`
`
FIGURE 27 is a block diagram of a general embodiment of the present invention, showing
`
`output of the source data element and encoded elements to a multilevel trellis-coded
`modulator.
`
`FIGURE 28 is a block diagram showing a general decoder for the TCM encoded output
`
of, for example, FIGURES 21, 23, and 25.
`
`Like reference numbers and designations in the various drawings indicate like elements.
`
`
`
`
`
`
`DETAILED DESCRIPTION OF THE INVENTION
`
`Throughout this description, the preferred embodiment and examples shown should be
`
`considered as exemplars, rather than as limitations on the present invention.
`
`Overview
`
Turbo codes are believed to be able to achieve near Shannon-limit error correction performance with relatively simple component codes and large interleavers. The present invention encompasses several novel designs for turbo code encoders and a corresponding decoder that is suitable for error correction in high-noise or constrained-bandwidth, low-power uses, such as personal communications systems (PCS) applications, where lower rate codes can be used.
`
For example, in multiple-access schemes like CDMA (Code Division Multiple Access), the capacity (maximum number of users per cell) can be expressed approximately as:

C \approx \frac{\eta}{E_b/N_0}
`
where η is the processing gain and E_b/N_0 is the required signal-to-noise ratio to achieve a desired bit error rate (BER) performance (E_b: energy received per useful bit; N_0: one-sided (monolateral) spectral density of the noise). For a specified BER, a smaller required E_b/N_0 implies a larger capacity or cell size. Unfortunately, to reduce E_b/N_0, it is necessary to use very complex codes (e.g., large constraint length convolutional codes). However, the present invention includes turbo codes that are suitable for CDMA and PCS applications and which can achieve superior performance with limited complexity. For example, if a (7, 1/2) convolutional code is used at BER = 10^-3, the capacity is C = 0.5η. However, if two (5, 1/3) punctured convolutional codes or three (4, 1/3) punctured codes are used in a turbo encoder structure in accordance with the present invention, the capacity can be increased to C = 0.8η (with 192-bit and 256-bit interleavers, which correspond to 9.6 Kbps and 13 Kbps with roughly 20 ms frames). Higher capacity can be obtained with larger interleavers. Note that low rate codes can be used for CDMA since an integer number of chips per coded symbol are used and bandwidth is defined mainly by chip rate.
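Purely as an illustration (not part of the specification), the capacity comparison above can be checked numerically with the short Python sketch below. The function name is invented for the example, and the required E_b/N_0 values of roughly 3 dB and 1 dB are assumptions chosen only so that the resulting ratios reproduce the 0.5η and 0.8η figures quoted above.

    def cdma_capacity(eta, ebno_db):
        """Approximate CDMA capacity C ~ eta / (Eb/N0), with the required Eb/N0 given in dB."""
        ebno = 10 ** (ebno_db / 10)         # convert the required Eb/N0 from dB to a plain ratio
        return eta / ebno

    eta = 128                               # an assumed processing gain, for illustration only
    print(cdma_capacity(eta, 3.0) / eta)    # ~0.50: a code needing about 3 dB gives C ~ 0.5*eta
    print(cdma_capacity(eta, 1.0) / eta)    # ~0.79: a code needing about 1 dB gives C ~ 0.8*eta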
`
`Implementation
`
The invention may be implemented in hardware or software, or a combination of both. In the preferred embodiment, the functions of a turbo coder and decoder designed in conformance with the principles set forth herein are implemented as one or more integrated circuits using a suitable processing technology (e.g., CMOS).
`
`As another example, the invention may be implemented in computer programs executing
`
`on programmable computers each comprising a processor, a data storage system
`
`(including volatile and non-volatile memory and/or storage elements), at least one input
`
`device, and at least one output device. Program code is applied to input data to perform
`
`the functions described herein and generate output information. The output information
`
`is applied to one or more output devices, in known fashion.
`
`Each such program may be implemented in a high level procedural or object oriented
`
`programming language to communicate with a computer system. However, the programs
`
`can be implemented in assembly or machine language, if desired. In any case, the
`
`language may be a compiled or interpreted language.
`
Each such computer program is preferably stored on a storage medium or device (e.g.,
`
`ROM or magnetic disk) readable by a general or special purpose programmable computer,
`
`for configuring and operating the computer when the storage media or device is read by
`
`the computer to perform the procedures described herein. The inventive system may also
`
`be considered to be implemented as a computer-readable storage medium, configured
`
`with a computer program, where the storage medium so configured causes a computer to
`
`
`operate in a specific and predefined manner to perform the functions described herein. An
`
`example of one such type of computer is a personal computer.
`
`Turbo Code Encoders
`
Following is a discussion of several general considerations in designing turbo code encoders and decoders in accordance with the present invention. Since these considerations pertain to the novel designs described below as well as prior art designs in some cases, a simple 3-code encoder, as shown in FIGURE 2, will be used as an initial example.
`
General Structure of an Encoder

In FIGURE 2, the turbo code encoder contains three recursive binary convolutional encoders, with M1, M2, and M3 memory cells (comprised of the delay gates D shown in each encoder), respectively. In general, the three component encoders may not be identical and may not have identical code rates. The information bit sequence u = (u_1, ..., u_N) of length N is applied to the component Encoder 1 through interleaver π_1 (normally set to the identity function), which outputs a sequence u_1. The component Encoder 1 produces two output sequences, x_1i and x_1p (the subscript i stands for "information" bits, while the subscript p stands for "parity" bits). The component Encoder 2 operates on a reordered sequence of information bits, u_2, produced by an interleaver (also known as a permuter), π_2, of length N, and outputs the sequence x_2p. The component Encoder 3 operates on a reordered sequence of information bits, u_3, produced by an interleaver, π_3, of length N, and outputs the sequence x_3p. Similarly, subsequent component encoders operate on a reordered sequence of information bits, u_j, produced by interleaver π_j, and output the sequence x_jp.

In the preferred embodiment, each interleaver is a pseudo-random block scrambler defined by a permutation of N elements with no repetitions. That is, a complete block is read into an interleaver and read out in a specified (fixed) random order.
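By way of illustration only, the following short Python sketch models such a pseudo-random block interleaver; the function names and the use of a fixed seed are choices made for this example and are not taken from the specification.

    import random

    def make_interleaver(N, seed=0):
        """Fixed pseudo-random permutation of N positions with no repetitions."""
        rng = random.Random(seed)          # a fixed seed keeps the permutation fixed once chosen
        perm = list(range(N))
        rng.shuffle(perm)
        return perm

    def interleave(block, perm):
        """Read a complete block in, read it out in the specified (fixed) random order."""
        return [block[p] for p in perm]

    # Example: reorder a block of N = 8 information bits for the second encoder
    perm = make_interleaver(8)
    u = [1, 0, 1, 1, 0, 0, 1, 0]
    u2 = interleave(u, perm)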
`
`
`
`
In general, a decoder (discussed more fully below) receives the transmitted sequences x_1i and x_jp as received sequences y. As noted above, the task of the decoder is to reconstruct the original data source d bit stream from the received data elements, which may have been corrupted by noise. In the present invention, the encoder does not need to transmit the original data sequence.
`
FIGURE 2 shows an example where a rate r = 1/n = 1/4 code is generated by three component codes with M_1 = M_2 = M_3 = M = 2, producing the outputs:

x_1i = u
x_1p = u · g_b/g_a
x_2p = u_2 · g_b/g_a
x_3p = u_3 · g_b/g_a

(π_1 is assumed to be an identity, i.e., no permutation), where the generator polynomials g_a and g_b have octal representations (7)_octal and (5)_octal, respectively. Note that various code rates can be obtained by proper puncturing of x_1p, x_2p, x_3p, and even x_1i if a decoder in accordance with the present invention is used (see below).
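As a purely illustrative sketch (not the patent's implementation), the Python function below encodes one block with the recursive systematic constituent code of this example, with feedback polynomial g_a = (7)_octal = 1 + D + D^2 and feedforward polynomial g_b = (5)_octal = 1 + D^2; the function name is invented here and trellis termination is ignored.

    def rsc_encode(u):
        """Rate 1/2 recursive systematic convolutional encoder with M = 2 memory cells.

        Feedback g_a = 1 + D + D^2 (7 octal), feedforward g_b = 1 + D^2 (5 octal),
        so the parity stream is u * g_b/g_a as in the outputs listed above.
        Returns (systematic bits, parity bits); trellis termination is omitted.
        """
        s1 = s2 = 0                    # contents of the two delay cells D
        x_i, x_p = [], []
        for bit in u:
            a = bit ^ s1 ^ s2          # feedback sum defined by g_a = 1 + D + D^2
            p = a ^ s2                 # feedforward sum defined by g_b = 1 + D^2
            x_i.append(bit)            # systematic output (x_1i = u)
            x_p.append(p)              # parity output (x_1p, x_2p, or x_3p)
            s2, s1 = s1, a             # shift the delay line
        return x_i, x_p

Under the same assumptions, the three-code rate 1/4 example would apply this routine to u, π_2(u), and π_3(u), keeping only the first copy's systematic output.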
`
Design of Preferred Constituent Encoders

A design for constituent convolutional codes, which are not necessarily optimum convolutional codes, was originally reported in S. Benedetto and G. Montorsi, "Design of Parallel Concatenated Convolutional Codes" (to be published in IEEE Transactions on Communications, 1996) for rate 1/n codes. We extend those results to rate b/n codes. It has been suggested (without proof) that good random codes are obtained if g_a is a primitive polynomial. This suggestion, used in the report cited above to obtain "good" rate 1/2 constituent codes, will be used here to obtain "good" rate 1/3, 2/3, 3/4, and 4/5 constituent codes. By "good" codes, we mean codes with a maximum effective free distance d_ef, that is, those codes that maximize the minimum output weight for weight-2 input sequences (because this weight tends to dominate the performance characteristics over the region of interest).
`
Maximizing the weight of output codewords corresponding to weight-2 data sequences gives the best BER performance for a moderate bit signal-to-noise ratio (SNR) as the random interleaver size N gets large. In this region, the dominant term in the expression for the bit error probability of a turbo code with q constituent encoders is:

P_b \approx \frac{\beta}{N^{q-1}} \, Q\!\left( \sqrt{ \frac{2 r E_b}{N_0} \left( \sum_{j=1}^{q} d_{j,2}^{\,p} + 2 \right) } \right)

where d_{j,2}^p is the minimum parity-weight (weight due to parity checks only) of the codewords at the output of the jth constituent code due to weight-2 data sequences, and β is a constant independent of N. Define d_{j,2} = d_{j,2}^p + 2 as the minimum output weight including parity and information bits, if the jth constituent code transmits the information (systematic) bits. Usually one constituent code transmits the information bits (j = 1), and the information bits of the other codes are punctured. Define d_ef = \sum_{j=1}^{q} d_{j,2}^p + 2 as the effective free distance of the turbo code and 1/N^{q-1} as the "interleaver's gain." We have the following bound on d_2^p for any constituent code: for any r = b/(b+1) recursive systematic convolutional encoder with generator matrix:
`
`
G = \left[ \, I_b \;\; \begin{matrix} h_1(D)/h_0(D) \\ \vdots \\ h_b(D)/h_0(D) \end{matrix} \, \right]

where I_b is the identity matrix, deg[h_i(D)] ≤ m, h_i(D) ≠ h_0(D), i = 1, 2, ..., b, and h_0(D) is a primitive polynomial of degree m, the following upper bound holds:

d_2^{\,p} \le \left\lfloor \frac{2^{m-1}}{b} \right\rfloor + 2
`
A corollary of this is that, for any r = b/n recursive systematic convolutional code with b inputs, b systematic outputs, and n - b parity output bits using a primitive feedback generator, we have:

d_2^{\,p} \le \left\lfloor \frac{(n-b)\,2^{m-1}}{b} \right\rfloor + 2(n-b)

There is an advantage to using b > 1, since the bound in the above equation for rate b/bn codes is larger than the bound for rate 1/n codes. Examples of codes that meet the upper bound for b/bn codes are set forth below.
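For illustration only, the two bounds above can be restated as a small Python helper; the function name and the floor-division convention are assumptions made for this sketch.

    def d2p_upper_bound(m, b, n):
        """Upper bound on the parity weight d_2^p (weight-2 inputs) for a rate b/n
        recursive systematic code whose feedback polynomial is primitive of degree m."""
        return ((n - b) * 2 ** (m - 1)) // b + 2 * (n - b)

    # Rate b/(b+1) case (n - b = 1): floor(2^(m-1)/b) + 2
    print(d2p_upper_bound(m=4, b=2, n=3))    # rate 2/3 with memory 4 -> 6
    # Rate 1/n case (b = 1): (n - 1)(2^(m-1) + 2), as used in section C below
    print(d2p_upper_bound(m=3, b=1, n=4))    # rate 1/4 with memory 3 -> 18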
`
A. Best Rate b/(b+1) Constituent Codes
`
We obtained the best rate 2/3 codes as shown in Table 1, where d_2 = d_2^p + 2. The minimum-weight codewords corresponding to weight-3 data sequences are denoted by d_3, d_min is the minimum distance of the code, and k = m + 1 in all the tables. By "best" we mean only codes with a large d_2 for a given m that result in a maximum effective free distance. We obtained the best rate 3/4 codes as shown in Table 2 and the best rate 4/5 codes as shown in Table 3.
`
`
`
`
`Table 1. Best rate 2/3 constituent codes.
`
`
`
`
`
`
`
`
`Table 2. Best rate 3/4 constituent codes.
`
`
`
`
`
`Table 3. Best rate 4/5 constituent codes.
`
`
`
`
`
`
`
`
`
`
`
`
`B. Best Punctured Rate 1/2 Constituent Codes
`
`A rate 2/3 constituent code can be derived by puncturing the parity bit of a rate 1/2
`
`recursive systematic convolutional code using, for example, a pattern P = [10]. A
`
puncturing pattern P has zeros where parity bits are removed.
`
`
Consider a rate 1/2 recursive systematic convolutional code (1, g_1(D)/g_0(D)). For an input u(D), the parity output can be obtained as:

x(D) = \frac{u(D)\, g_1(D)}{g_0(D)}

We would like to puncture the output x(D) using, for example, the puncturing pattern P = [10] (decimation by 2) and obtain the generator polynomials h_0(D), h_1(D), and h_2(D) for the equivalent rate 2/3 code:

\frac{h_1(D)}{h_0(D)}, \qquad \frac{h_2(D)}{h_0(D)}
`
`
We note that any polynomial f(D) = \sum_i a_i D^i, with a_i ∈ GF(2), can be written as:

f(D) = f_1(D^2) + D f_2(D^2)

where f_1(D^2) corresponds to the even power terms of f(D), and D f_2(D^2) corresponds to the odd power terms of f(D). Now, if we use this approach and apply it to u(D), g_1(D), and g_0(D), then we can rewrite the equation for x(D) as:

x_1(D^2) + D x_2(D^2) = \frac{\left( u_1(D^2) + D u_2(D^2) \right) \left( g_{11}(D^2) + D g_{12}(D^2) \right)}{g_{01}(D^2) + D g_{02}(D^2)}

where x_1(D) and x_2(D) correspond to the punctured output x(D) using puncturing patterns P = [10] and P = [01], respectively. If we multiply both sides of the above equation by (g_{01}(D^2) + D g_{02}(D^2)) and equate the even and the odd power terms, we obtain two equations in two unknowns, namely x_1(D) and x_2(D). For example, solving for x_1(D), we obtain:

x_1(D) = u_1(D)\,\frac{h_1(D)}{h_0(D)} + u_2(D)\,\frac{h_2(D)}{h_0(D)}

where h_0(D) = g_0(D) and:

h_1(D) = g_{11}(D)\, g_{01}(D) + D\, g_{12}(D)\, g_{02}(D)

h_2(D) = D \left[ g_{12}(D)\, g_{01}(D) + g_{11}(D)\, g_{02}(D) \right]
`
From the second equation above, it is clear that h_{2,0} = 0. A similar method can be used to show that for P = [01] we get h_{1,0} = 0. These imply that the conditions that h_{0,0} = 1 and h_{i,0} = 1 will be violated. Thus, we have the following theorem: if the parity puncturing pattern is P = [10] or P = [01], then it is impossible to achieve the upper bound on d_2 = d_2^p + 2 for rate 2/3 codes derived by puncturing rate 1/2 codes.
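The derivation above can be exercised numerically. The following Python sketch is an illustration only; the helper names and the bitmask encoding of polynomials are assumptions. It splits g_0(D) and g_1(D) into even and odd parts and forms h_0(D), h_1(D), and h_2(D) as in the equations above; for the (7, 5) octal pair it confirms that the constant term h_{2,0} is zero.

    def gf2_mul(a, b):
        """Carry-less (GF(2)) product of two polynomials encoded as integer bitmasks."""
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            b >>= 1
        return result

    def even_odd_split(f):
        """Return (f1, f2) such that f(D) = f1(D^2) + D*f2(D^2)."""
        f1 = f2 = 0
        i = 0
        while f:
            if f & 1:
                if i % 2 == 0:
                    f1 |= 1 << (i // 2)
                else:
                    f2 |= 1 << (i // 2)
            f >>= 1
            i += 1
        return f1, f2

    def punctured_equivalent(g0, g1):
        """h0, h1, h2 of the rate 2/3 code equivalent to puncturing (1, g1/g0) with P = [10]."""
        g01, g02 = even_odd_split(g0)
        g11, g12 = even_odd_split(g1)
        D = 0b10
        h0 = gf2_mul(g01, g01) ^ gf2_mul(D, gf2_mul(g02, g02))    # works out to g0 itself
        h1 = gf2_mul(g11, g01) ^ gf2_mul(D, gf2_mul(g12, g02))
        h2 = gf2_mul(D, gf2_mul(g12, g01) ^ gf2_mul(g11, g02))
        return h0, h1, h2

    h0, h1, h2 = punctured_equivalent(0b111, 0b101)   # g0 = 7 octal, g1 = 5 octal
    print(bin(h0), bin(h1), bin(h2), h2 & 1)          # h2 & 1 == 0, i.e., h_{2,0} = 0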
`
`The best rate 1/2 constituent codes with puncturing pattern P = [10] that achieve the
`
`largest d2 are given in Table 4.
`
`
`
`
`Table 4. Best rate 1/2 punctured constituent codes.
`
`
`
C. Best Rate 1/n Constituent Codes

As known in the art, for rate 1/n codes, the upper bound for b = 1 reduces to:

d_2^{\,p} \le (n-1)\left(2^{m-1} + 2\right)

Based on this condition, we have obtained the best rate 1/3 and 1/4 codes without parity repetition, as shown in Tables 5 and 6, where d_2 = d_2^p + 2 represents the minimum output weight given by weight-2 data sequences. The best rate 1/2 constituent codes are given by g_0 and g_1 in Table 5, as has been reported by S. Benedetto et al., supra.
`
`Table 5. Best rate 1/3 constituent codes.
`
`
`
`
`
`
`
`
`
`
`Table 6. Best rate 1/4 constituent codes.
`
`Code generator
`
`General Interleaver Design Considerations
`
In order to estimate the performance of a code, it is necessary to have information about its minimum distance, weight distribution, or actual code geometry, depending on the accuracy required for the bounds or approximations. The challenge is in finding the pairing of codewords from each individual encoder, induced by a particular set of interleavers. We have found that it is best to avoid joining low-weight codewords from one encoder with low-weight words from the other encoders. In the example of FIGURE 2, the component codes have minimum distances 5, 2, and 2. This will produce a worst-case minimum distance of 9 for the overall code. Note that this would be unavoidable if the encoders were not recursive since, in this case, the minimum weight word for all three encoders is generated by the input sequence u = (00...0000100...000) with a single "1", which will appear again in the other encoders, for any choice of interleavers. This motivates the use of recursive encoders, where the key ingredient is the recursiveness and not the fact that the encoders are systematic. For this example, the input sequence u = (00...00100100...000) generates a low-weight codeword with weight 6 for the first encoder. If the interleavers do not "break" this input pattern, the resulting codeword's weight will be 14. In general, weight-2 sequences with 2 + 3t zeros separating the 1's would result in a total weight of 14 + 6t if there were no permutations. By contrast, if the number of zeros between the ones is not of this form, the encoded output is nonterminating until the end of the block, and its encoded weight is very large unless the sequence occurs near the end of the block.
`
With permutations before the second and third encoders, a weight-2 sequence with its 1's separated by 2 + 3t_1 zeros will be permuted into two other weight-2 sequences with 1's separated by 2 + 3t_i zeros, where i = 2, 3, and where each t_i is defined as a multiple of 1/3. If any t_i is not an integer, the corresponding encoded output will have a high weight because then the convolutional code output is nonterminating (until the end of the block). If all t_i are integers, the total encoded weight will be 14 + 2\sum_{i=1}^{3} t_i. Thus, one of the considerations in designing the interleaver is to avoid integer triplets (t_1, t_2, t_3) that are simultaneously small in all three components. In fact, it would be nice to design an interleaver to guarantee that the smallest value of \sum_{i=1}^{3} t_i (for integer t_i) grows with the block size N.
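As an illustration only, the following Python sketch carries out this check for a candidate set of interleavers: it searches all weight-2 inputs whose 1's are separated by 2 + 3t zeros, keeps those that remain "badly" spaced in every permuted copy, and reports the smallest resulting total weight 14 + 2(t_1 + t_2 + t_3). The interleaver convention (output position k takes input position perm[k]) and the function names are assumptions of the sketch.

    import random

    def weight2_merit(perms, N):
        """Smallest total weight 14 + 2*(t1 + t2 + t3) over weight-2 inputs whose 1's keep a
        2 + 3t spacing (t a non-negative integer) in the original block and in every permutation."""
        inverses = []
        for perm in perms:                       # convention: out[k] = in[perm[k]]
            inv = [0] * N
            for k, p in enumerate(perm):
                inv[p] = k                       # where input position p lands after interleaving
            inverses.append(inv)
        best = None
        for a in range(N):
            for b in range(a + 3, N, 3):         # input separation b - a - 1 = 2 + 3*t1
                ts = [(b - a - 3) // 3]          # t1 seen by the first (un-interleaved) encoder
                bad_everywhere = True
                for inv in inverses:             # the other encoders see permuted positions
                    d = abs(inv[a] - inv[b])
                    if d < 3 or d % 3 != 0:      # spacing not of the form 2 + 3t -> high weight
                        bad_everywhere = False
                        break
                    ts.append((d - 3) // 3)
                if bad_everywhere:
                    total = 14 + 2 * sum(ts)
                    best = total if best is None else min(best, total)
        return best

    # Example: score two random interleavers of size N = 192 for a three-code encoder
    N = 192
    perms = [random.Random(seed).sample(range(N), N) for seed in (1, 2)]
    print(weight2_merit(perms, N))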
`
For comparison, consider the same encoder structure in FIGURE 2, except with the roles of g_a and g_b reversed. Now the minimum distances of the three component codes are 5, 3, and 3, producing an overall minimum distance of 11 for the total code without any permutations. This is apparently a better code, but it turns out to be inferior as a turbo code. This paradox is explained by again considering the critical weight-2 data sequences. For this code, weight-2 sequences with 1 + 2t_1 zeros separating the two 1's produce self-terminating output and, hence, low-weight encoded words. In the turbo encoder, such sequences will be permuted to have separations 1 + 2t_i, where i = 2, 3, for the second and third encoders, but each t_i is now defined as a multiple of 1/2. Now the total encoded weight for integer triplets (t_1, t_2, t_3) is 11 + \sum_{i=1}^{3} t_i. Notice that this weight grows only half as fast with \sum_{i=1}^{3} t_i as the previously calculated weight for the original code. If \sum_{i=1}^{3} t_i can be made to grow with block size by the proper choice of an interleaver, then clearly it is important to choose component codes that cause the overall weight to grow as fast as possible with the individual separations t_i. This consideration outweighs the criterion of selecting component codes that would produce the highest minimum distance if unpermuted.
`
There are also many weight-n, n = 3, 4, 5, ..., data sequences that produce self-terminating output and, hence, low encoded weight. However, as argued below, these sequences are much more likely to be broken up by random interleavers than the weight-2 sequences and are, therefore, likely to produce nonterminating output from at least one of the encoders. Thus, turbo code structures that would have low minimum distances if unpermuted can still perform well if the low-weight codewords of the component codes are produced by input sequences with weight higher than two.
`
We briefly examine the issue of whether one or more random interleavers can avoid matching small separations between the 1's of a weight-2 data sequence with equally small separations between the 1's of its permuted version(s). Consider for example a particular weight-2 data sequence (...001001000...), which corresponds to a low-weight codeword in each of the encoders of FIGURE 2. If we randomly select an interleaver of size N, the probability that this sequence will be permuted into another sequence of the same form is roughly 2/N (assuming that N is large, and ignoring minor edge effects). The probability that such an unfortunate pairing happens for at least one possible position of the original sequence within the block size of N is approximately 1 - (1 - 2/N)^N ≈ 1 - e^-2. This implies that the minimum distance of a two-code turbo code constructed with a random permutation is not likely to be much higher than the encoded weight of such an unpermuted weight-2 data sequence, e.g., 14 for the code in FIGURE 2. (For the worst-case permutations, the d_min of the code is still 9, but these permutations are highly unlikely if chosen randomly.) By contrast, if we use three codes and two different interleavers, the probability that the particular sequence above will be reproduced by both interleavers is only (2/N)^2. Now the probability of finding such an unfortunate data sequence somewhere within the block of size N is roughly 1 - [1 - (2/N)^2]^N ≈ 4/N. Thus it is probable that a three-code turbo code using two random interleavers will see an increase in its minimum distance beyond the encoded weight of an unpermuted weight-2 data sequence. This argument can be extended to account for other weight-2 data sequences which may also produce low-weight codewords, e.g., (...00100(000)^t 1000...), for the code in FIGURE 2.
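Purely as a numerical illustration of the two estimates just given (the function name and the block size are arbitrary choices for this sketch), a few lines of Python reproduce them:

    import math

    def p_bad_pairing(N, q):
        """Rough probability that some badly spaced weight-2 input survives all q - 1
        random interleavers, using the 2/N-per-interleaver estimate from the text."""
        per_position = (2 / N) ** (q - 1)
        return 1 - (1 - per_position) ** N        # any of roughly N starting positions

    N = 4096
    print(p_bad_pairing(N, 2), 1 - math.exp(-2))  # two codes: about 0.86, independent of N
    print(p_bad_pairing(N, 3), 4 / N)             # three codes: about 4/N, shrinking with N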
`
For comparison, consider a weight-3 data sequence such as (...0011100...), which for our example corresponds to the minimum distance of the code (using no permutations). The probability that this sequence is reproduced with one random interleaver is roughly 6/N^2, and the probability that some sequence of that form is paired with another of the same form is 1 - (1 - 6/N^2)^N ≈ 6/N. Thus, for large block sizes, bad weight-3 data sequences have a small probability of being matched with bad weight-3 permuted data sequences, even in a two-code system.

For a turbo code using q codes and q - 1 random interleavers, this probability is even smaller, 1 - [1 - (6/N^2)^(q-1)]^N ≈ (6/N)(6/N^2)^(q-2). This implies that the minimum distance codeword of the turbo code in FIGURE 2 is more likely to result from a weight-2 data sequence of the form (...001001000...) than from the weight-3 sequence (...0011100...) that produces the minimum distance in the unpermuted version of the same code. Higher-weight sequences have an even smaller probability of reproducing themselves after being passed through a random interleaver.
`
For a turbo code using q codes and q - 1 interleavers, the probability that a weight-n data sequence will be reproduced somewhere within the block by all q - 1 permutations is of the form 1 - [1 - (β/N^(n-1))^(q-1)]^N, where β is a number that depends on the weight-n data sequence but does not increase with block size N. For large N, this probability is proportional to (1/N)^(nq-n-q), which falls off rapidly with N when n and q are greater than two. Furthermore, the symmetry of this expression indicates that increasing either the weight of the data sequence n or the number of codes q has roughly the same effect on lowering this probability.
`
`
`
`
In summary, from the above arguments, we conclude that weight-2 data sequences are an important factor in the design of the component codes, and that higher weight sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of interleavers makes it more and more likely that bad input sequences will be broken up by one or more of the permutations.

The minimum distance is not the most important characteristic of the turbo code, except for its asymptotic performance at very high E_b/N_0. At moderate signal-to-noise ratios (SNRs), the weight distribution for the first sever