IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 16, NO. 2, FEBRUARY 1998
Analysis, Design, and Iterative Decoding of Double Serially Concatenated Codes with Interleavers

Sergio Benedetto, Fellow, IEEE, Dariush Divsalar, Fellow, IEEE, Guido Montorsi, Member, IEEE, and Fabrizio Pollara, Member, IEEE
`
Abstract—A double serially concatenated code with two interleavers consists of the cascade of an outer encoder, an interleaver permuting the outer codeword bits, a middle encoder, another interleaver permuting the middle codeword bits, and an inner encoder whose input words are the permuted middle codewords. The construction can be generalized to h cascaded encoders separated by h − 1 interleavers, where h > 3. We obtain upper bounds to the average maximum likelihood bit-error probability of double serially concatenated block and convolutional coding schemes. Then, we derive design guidelines for the outer, middle, and inner codes that maximize the interleaver gain and the asymptotic slope of the error probability curves. Finally, we propose a low-complexity iterative decoding algorithm. Comparisons with parallel concatenated convolutional codes, known as "turbo codes," and with the recently proposed serially concatenated convolutional codes are also presented, showing that in some cases, the new schemes offer better performance.

Index Terms—Code design, concatenated codes, iterative decoding.
`
ACRONYMS

APP     A posteriori probability.
BCJR    Bahl, Cocke, Jelinek, Raviv.
CC      Constituent code.
CWEF    Conditional weight enumerating function.
DPCCC   Double parallel concatenated convolutional code.
DSCC    Double serially concatenated code.
DSCBC   Double serially concatenated block code.
DSCCC   Double serially concatenated convolutional code.
IOWEF   Input–output weight enumerating function.
LPDF    Logarithm of the probability density function.
ML      Maximum likelihood.
MPCCC   Multiple parallel concatenated convolutional code.
PCC     Parallel concatenated code.
PCCC    Parallel concatenated convolutional code.
SCC     Serially concatenated code.
SCBC    Serially concatenated block code.
SCCC    Serially concatenated convolutional code.
`
Manuscript received October 16, 1996; revised April 17, 1997. This work was supported in part by NATO under Research Grant CRG 951208, and by QUALCOMM, Inc. The work of S. Benedetto and G. Montorsi was supported by the Italian Space Agency (ASI). The research described in this paper was performed in part at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration (NASA).
S. Benedetto and G. Montorsi are with Dipartimento di Elettronica, Politecnico di Torino, 10129 Torino, Italy.
D. Divsalar and F. Pollara are with the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 USA.
Publisher Item Identifier S 0733-8716(98)00161-9.
0733–8716/98$10.00 © 1998 IEEE.
`
I. INTRODUCTION

IN HIS attempt to find a class of codes whose probability of error decreased exponentially at rates less than capacity, while decoding complexity increased only algebraically, Forney [1] arrived at a solution consisting of the multilevel coding structure known as a concatenated code. It consists of the cascade of an inner code and an outer code. In most applications, the inner code is a relatively short code (typically, a convolutional code) admitting simple maximum likelihood decoding, whereas the outer code is a long, high-rate algebraic code (typically, a nonbinary Reed–Solomon code) equipped with a powerful algebraic error-correction algorithm.

Concatenated codes have found a great number of system applications in those cases where very high coding gains are needed, such as, for example, (deep-)space applications. Alternative solutions for concatenation have also been studied, such as using a trellis-coded modulation scheme as inner code [2], or concatenating two convolutional codes [3]. In the latter case, the inner Viterbi decoder employs a soft-output decoding algorithm to provide soft-input decisions to the outer Viterbi decoder. An interleaver was also proposed between the two encoders to separate the bursts of errors produced by the inner decoder.

A "classical" concatenated coding scheme thus contains the main ingredients that formed the basis for the invention of "turbo codes" [4], namely, two or more constituent codes (CC's) and an interleaver. The novelty of turbo codes, however, consists of the way they use the interleaver, which is embedded into the code structure to form an overall concatenated code with a very large block length, and in the proposal of a parallel concatenation to achieve a higher rate for given rates of the CC's. The latter advantage is obtained using systematic CC's and not transmitting the information bits entering the second encoder. The idea of parallel concatenation of two codes was extended to multiple (more than two) codes, MPCCC's, in [5] and [6]. The codes obtained in [6] have been shown to yield very high coding gains at low bit-error probabilities; in particular, low bit-error probabilities can be obtained at rates well beyond the channel cutoff rate. As an example, a rate-1/4 MPCCC using three eight-state convolutional CC's, an interleaver with length 4096, and 20 iterations of the decoding algorithm yields a bit-error probability of $10^{-5}$ at an $E_b/N_0$ of 0.2 dB. Quite remarkably, this performance can be achieved by a relatively simple iterative decoding technique whose
computational complexity is comparable to that needed to decode the three CC's.
Recently, serially concatenated block and convolutional codes (SCBC's and SCCC's) with interleavers have been proposed in [7]. Their average maximum likelihood performance, evaluated through an upper bound to the bit-error probability, shows an interleaver gain, i.e., a decrease of the bit-error probability with increasing interleaver length, significantly higher than for turbo codes. In [7], techniques for designing the CC's and an iterative decoding algorithm were also illustrated.
In this paper, we extend the results of serial concatenation to the case of three cascaded codes with two interleavers, a scheme denoted as a double serially concatenated code (DSCC) and called a double serially concatenated block code (DSCBC) or a double serially concatenated convolutional code (DSCCC) according to the nature of the CC's. For this class of codes, we obtain analytical upper bounds to the performance of a maximum likelihood (ML) decoder, propose design guidelines leading to the optimal choice of CC's that maximize the interleaver gain and the asymptotic code performance, and present an iterative decoding algorithm that generalizes the one presented in [7]. Comparisons with turbo codes and serially concatenated codes of the same complexity and decoding delay are also performed.
In Section II, we derive analytical upper bounds to the bit-error probability of both DSCBC's and DSCCC's, using the concept of the "uniform interleaver," which decouples the output of the outer encoder from the input of the middle encoder, and the output of the middle encoder from the input of the inner encoder. In Section III, we propose design rules for DSCCC's through an asymptotic approximation of the bit-error probability bound, assuming long interleavers or large signal-to-noise ratios. In Section IV, we compare double and simple serial concatenations of block and convolutional codes in terms of maximum likelihood analytical upper bounds. Section V is devoted to the presentation of an iterative decoding algorithm, derived from the one introduced in [7], and to its application to some significant codes.
`
II. ANALYTICAL BOUNDS TO THE PERFORMANCE OF DOUBLE SERIALLY CONCATENATED CODES

For simplicity of presentation, we begin by considering double serially concatenated block codes (DSCBC's).
`
A. Double Serially Concatenated Block Codes
The scheme of a double serially concatenated block code is shown in Fig. 1. It is composed of three cascaded CC's: the outer $(N_1, k)$ code $C^o$ with rate $R_c^o = k/N_1$, the middle $(N_2, N_1)$ code $C^m$ with rate $R_c^m = N_1/N_2$, and the inner $(n, N_2)$ code $C^i$ with rate $R_c^i = N_2/n$, linked by two interleavers of lengths $N_1$ and $N_2$. The overall DSCBC is then an $(n, k)$ code, and we will refer to it as the $(n, k, N_1, N_2)$ code, also including the interleaver lengths. In the following, we will derive an upper bound to the ML performance of the overall code. We assume that the CC's are linear, so that the DSCBC is also linear and the uniform error property applies, i.e., the bit-error probability can be evaluated assuming that the all-zero codeword has been transmitted.
As in [8], [9], and [7], a crucial step in the analysis consists of replacing the actual interleaver that performs a permutation

Fig. 1. Double serially concatenated (n, k, N1, N2) block code.
`
Fig. 2. Action of a uniform interleaver of length 4 on sequences of weight 2.

of the input bits with an abstract interleaver called the uniform interleaver, defined as a probabilistic device that maps a given input word of weight $w$ into all $\binom{N}{w}$ distinct permutations of it with equal probability $1/\binom{N}{w}$ (see Fig. 2), so that the input and output weight is preserved. Use of the uniform interleaver permits the computation of the "average" performance of DSCBC's, intended as the expectation of the performance of DSCBC's using the same CC's, taken over the ensemble of all interleavers of given lengths. A theorem proved in [9] guarantees the meaningfulness of the average performance, in the sense that there will always be, for each value of the signal-to-noise ratio, at least a set of two particular interleavers yielding performance better than or equal to that of the two uniform interleavers.
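The action of the uniform interleaver can be illustrated with a short sketch (ours, not from the paper): it enumerates the $\binom{4}{2} = 6$ equiprobable weight-2 words of length 4, exactly the situation depicted in Fig. 2.

```python
from itertools import combinations
from math import comb

def uniform_interleaver_outputs(n, w):
    """All distinct weight-w binary words of length n, i.e., the possible
    outputs of a uniform interleaver fed with any weight-w input word.
    Each output occurs with equal probability 1 / C(n, w)."""
    words = []
    for ones in combinations(range(n), w):
        words.append(tuple(1 if i in ones else 0 for i in range(n)))
    return words

outputs = uniform_interleaver_outputs(4, 2)
print(len(outputs), "equiprobable permutations, each with probability", 1 / comb(4, 2))
```

Because every permutation is equiprobable, averaging over this device is what makes the combinatorial expressions below tractable.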
Let us define the input–output weight enumerating function (IOWEF) of the DSCBC $C_s$ as

$$A^{C_s}(W,H) \triangleq \sum_{w}\sum_{h} A_{w,h}\, W^w H^h \qquad (1)$$

where $A_{w,h}$ is the number of codewords of the DSCBC with weight $h$ associated to an input word of weight $w$. We also define the conditional weight enumerating function (CWEF) $A^{C_s}(w,H)$ of the DSCBC as the weight distribution of the codewords of the DSCBC which have input word weight $w$. It is related to the IOWEF by

$$A^{C_s}(w,H) = \frac{1}{w!} \cdot \frac{\partial^w A^{C_s}(W,H)}{\partial W^w}\bigg|_{W=0}. \qquad (2)$$
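For a small block code, the coefficients $A_{w,h}$ in (1) can be computed by exhaustive encoding. The sketch below is our own illustration, using an assumed systematic generator $G = [I_4 \mid P]$ for the (7, 4) Hamming code; its overall weight distribution 1, 7, 7, 1 at weights 0, 3, 4, 7 is the classical one.

```python
from itertools import product
from collections import Counter

# Rows of the parity part P of a systematic (7,4) Hamming generator
# G = [I4 | P] (an assumed, standard choice).
P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

def iowef(P, k):
    """Count A_{w,h}: number of codewords of output weight h generated
    by input words of weight w, by encoding all 2^k inputs."""
    A = Counter()
    for u in product((0, 1), repeat=k):
        parity = [0, 0, 0]
        for bit, row in zip(u, P):
            if bit:
                parity = [p ^ r for p, r in zip(parity, row)]
        w, h = sum(u), sum(u) + sum(parity)
        A[(w, h)] += 1
    return dict(A)

A = iowef(P, 4)
# Summing A_{w,h} over w gives the weight distribution 1, 7, 7, 1
# at weights 0, 3, 4, 7.
```

Collapsing $A$ over $w$ recovers the ordinary weight enumerator; keeping the input weight is what the serial-concatenation analysis needs.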
`
With the knowledge of the CWEF, an upper bound to the bit-error probability of the maximum likelihood decoded DSCBC can be obtained in the form [9]

$$P_b(e) \le \sum_{w=1}^{k} \frac{w}{k}\, A^{C_s}(w,H)\Big|_{H = e^{-R_c E_b/N_0}} \qquad (3)$$

where $R_c = k/n$ is the rate of the DSCBC and $E_b/N_0$ is the signal-to-noise ratio per bit.
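Once the coefficients $A_{w,h}$ are available, bound (3) is a one-line computation. A sketch of ours follows; the coefficients used below are those of the (7, 4) Hamming code, serving only as an illustrative stand-in, not a DSCBC from the paper.

```python
from math import exp

def union_bound_ber(A, n, k, ebno_db):
    """Evaluate bound (3): Pb <= sum_{w,h} (w/k) A_{w,h} H^h,
    with H = exp(-Rc * Eb/N0) and Rc = k/n."""
    rc = k / n
    ebno = 10 ** (ebno_db / 10)     # dB to linear
    H = exp(-rc * ebno)
    return sum((w / k) * count * H ** h
               for (w, h), count in A.items() if w > 0)

# IOWEF coefficients {(w, h): A_{w,h}} of a systematic (7,4) Hamming code.
A = {(1, 3): 3, (1, 4): 1, (2, 3): 3, (2, 4): 3,
     (3, 3): 1, (3, 4): 3, (4, 7): 1}
print(union_bound_ber(A, n=7, k=4, ebno_db=5.0))
```

Since every term is a positive power of $H < 1$, the bound decreases monotonically with the signal-to-noise ratio, as expected.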
The problem thus consists of the evaluation of the CWEF of the DSCBC from the knowledge of the CWEF's of the outer, the middle, and the inner codes, which we call $A^{C_o}(w,H)$, $A^{C_m}(w,H)$, and $A^{C_i}(w,H)$. To do this, we exploit the properties of the uniform interleavers. The first interleaver transforms a codeword of weight $l$ at the output of the outer encoder into all of its $\binom{N_1}{l}$ distinct permutations. Similarly, the second interleaver transforms a codeword of
`
`

`

weight $m$ at the output of the middle encoder into all of its $\binom{N_2}{m}$ distinct permutations. As a consequence, each codeword of the outer code $C^o$ of weight $l$, through the action of the first uniform interleaver, enters the middle encoder generating $A^{C_m}_{l,m}$ codewords of the middle code $C^m$ of weight $m$ and, in turn, each codeword of the middle code $C^m$ of weight $m$, through the action of the second uniform interleaver, enters the inner encoder generating $A^{C_i}_{m,h}$ codewords of the inner code $C^i$ of weight $h$. Thus, the number $A_{w,h}$ of codewords of the DSCBC of weight $h$ associated with an input word of weight $w$ is given by

$$A_{w,h} = \sum_{l=0}^{N_1} \sum_{m=0}^{N_2} \frac{A^{C_o}_{w,l}\, A^{C_m}_{l,m}\, A^{C_i}_{m,h}}{\binom{N_1}{l}\binom{N_2}{m}}. \qquad (4)$$

From (4), we derive the expressions of the IOWEF and CWEF of the DSCBC

$$A^{C_s}(W,H) = \sum_{w}\sum_{h} A_{w,h}\, W^w H^h \qquad (5)$$

$$A^{C_s}(w,H) = \sum_{l=0}^{N_1} \sum_{m=0}^{N_2} \frac{A^{C_o}_{w,l}\, A^{C_m}_{l,m}\, A^{C_i}(m,H)}{\binom{N_1}{l}\binom{N_2}{m}} \qquad (6)$$

where $A^{C_o}_{w,l}$ is the conditional weight distribution of the input words of weight $w$ that generate codewords of the outer code of weight $l$.
Example 1: Consider the (7, 2) DSCBC obtained by concatenating a (3, 2) parity-check code to a (4, 3) parity-check code and, finally, to a (7, 4) systematic Hamming code through two interleavers of lengths $N_1 = 3$ and $N_2 = 4$. The IOWEF's of the outer, middle, and inner codes are

$$A^{C_o}(W,H) = 1 + 2WH^2 + W^2H^2$$

$$A^{C_m}(W,H) = 1 + 3WH^2 + 3W^2H^2 + W^3H^4$$

and

$$A^{C_i}(W,H) = 1 + W(3H^3 + H^4) + W^2(3H^3 + 3H^4) + W^3(H^3 + 3H^4) + W^4H^7.$$

This implies $A^{C_o}_{1,2} = 2$ and $A^{C_o}_{2,2} = 1$, with $A^{C_o}_{w,l} = 0$ for all other pairs with $w \ge 1$, so that only the terms with $l = 2$ and, consequently, $m = 2$ contribute to (4). Through (5) and (6), we then obtain

$$A^{C_s}(W,H) = 1 + W(H^3 + H^4) + \tfrac{1}{2}\, W^2(H^3 + H^4).$$
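The uniform-interleaver combination rule can be checked numerically for the component codes of Example 1. The sketch below is ours: each IOWEF is stored as a dictionary of coefficients, the averaging of (4) is applied twice, and the (7, 4) Hamming coefficients assume a systematic generator.

```python
from math import comb
from collections import defaultdict

# IOWEF coefficients {(input weight, output weight): count} of the CC's.
A_o = {(0, 0): 1, (1, 2): 2, (2, 2): 1}                      # (3,2) parity check
A_m = {(0, 0): 1, (1, 2): 3, (2, 2): 3, (3, 4): 1}           # (4,3) parity check
A_i = {(0, 0): 1, (1, 3): 3, (1, 4): 1, (2, 3): 3, (2, 4): 3,
       (3, 3): 1, (3, 4): 3, (4, 7): 1}                      # (7,4) Hamming

def concatenate(A_out, A_in, N):
    """Average IOWEF of a serial concatenation through a length-N
    uniform interleaver: A_{w,h} = sum_l A_out_{w,l} A_in_{l,h} / C(N,l)."""
    A = defaultdict(float)
    for (w, l), a_out in A_out.items():
        for (l2, h), a_in in A_in.items():
            if l == l2:
                A[(w, h)] += a_out * a_in / comb(N, l)
    return dict(A)

A_dscbc = concatenate(concatenate(A_o, A_m, 3), A_i, 4)
# → 1 + W(H^3 + H^4) + 0.5 W^2 (H^3 + H^4)
```

The fractional coefficient 1/2 is not an error: the result is an ensemble average over all interleaver pairs, so the coefficients need not be integers.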
`
The previous results (5) and (6) can be easily generalized to the case of two longer interleavers, the first with length $qN_1$, an integer multiple of the length of the outer codewords, and the second with length $qN_2$, the same integer multiple of the length of the middle codewords. Denoting by $A^{C_o,q}(W,H)$ the IOWEF of the new outer code, by $A^{C_m,q}(W,H)$ the IOWEF of the new middle code and, finally, by $A^{C_i,q}(W,H)$ the IOWEF of the new inner code, it is straightforward to obtain

$$A^{C_o,q}(W,H) = [A^{C_o}(W,H)]^q, \quad A^{C_m,q}(W,H) = [A^{C_m}(W,H)]^q, \quad A^{C_i,q}(W,H) = [A^{C_i}(W,H)]^q. \qquad (7)$$

From the IOWEF's (7), through (2), we obtain the CWEF's of the new CC's and, finally, the IOWEF and CWEF of the new $(qn, qk, qN_1, qN_2)$ DSCBC

$$A_{w,h} = \sum_{l=0}^{qN_1} \sum_{m=0}^{qN_2} \frac{A^{C_o,q}_{w,l}\, A^{C_m,q}_{l,m}\, A^{C_i,q}_{m,h}}{\binom{qN_1}{l}\binom{qN_2}{m}} \qquad (8)$$

$$A^{C_s}(w,H) = \sum_{l=0}^{qN_1} \sum_{m=0}^{qN_2} \frac{A^{C_o,q}_{w,l}\, A^{C_m,q}_{l,m}\, A^{C_i,q}(m,H)}{\binom{qN_1}{l}\binom{qN_2}{m}}. \qquad (9)$$
`
Example 2: We consider a DSCBC composed of a (4, 3) parity-check code as outer code, a (7, 4) Hamming code as middle code, and a (15, 7) BCH code as inner code. These CC's are linked by interleavers of lengths $N_1 = 4q$ and $N_2 = 7q$. Using (8) and (3), upper bounds to the bit-error probability are obtained and plotted in Fig. 3 for various values of the integer $q$. The curves show the interleaver gain, defined as the factor by which the bit-error probability is decreased with the interleaver length. Contrary to the case of parallel concatenated block codes [9], the curves do not exhibit
`

`

`
`
Fig. 3. Analytical bound to the ML bit-error probability for the DSCBC of Example 2. The values of q are consecutive powers of 2, i.e., q = 2^l, l = 0, ..., 10.

the interleaver gain saturation, i.e., a phenomenon in which the interleaver gain progressively decreases as the interleaver length increases, up to a point at which a further increase of the interleaver length does not yield any additional gain (examples of gain saturation have been reported in [9]). Rather, for sufficiently high signal-to-noise ratios, the bit-error probability seems to decrease regularly with the interleaver length. We will explain this behavior in Section IV.
`
B. Double Serially Concatenated Convolutional Codes
The structure of a DSCCC is shown in Fig. 4. It refers to the case of three convolutional CC's: the outer code $C^o$ with rate $R_c^o$, the middle code $C^m$ with rate $R_c^m$, and the inner code $C^i$ with rate $R_c^i$, joined by two interleavers of lengths $N_1$ and $N_2$ bits, generating a DSCCC with rate $R_c = R_c^o R_c^m R_c^i$. Note that $N_1$ must be an integer multiple of the length of the outer code words, and $N_2$ must be an integer multiple of the length of the middle code words.¹ In addition, the middle code rate imposes the constraint $N_2 = N_1/R_c^m$, so that the input block length is $K = N_1 R_c^o$. We assume, as before, that the convolutional CC's are linear, so that the DSCCC is linear as well, and the uniform error property applies.
The exact analysis of this scheme can be performed by appropriate modifications of that described in [9] for PCCC's. It requires the use of a hyper-trellis having as hyper-states the sets of states of the outer, middle, and inner codes. Two hyper-states are joined by a hyper-branch that consists of all the triplets of paths of the proper length joining the corresponding states of the inner code, of the middle code, and of the outer code, respectively. Each hyper-branch is thus an equivalent
1 Actually, this constraint is not necessary. We can choose, in fact, inner, middle, and outer codes of any rates R_c^i = k_i/n_i, R_c^m = k_m/n_m, and R_c^o = k_o/n_o, constraining the interleaver lengths to be an integer multiple of the minimum common multiple of n_o and k_m, and of the minimum common multiple of n_m and k_i, i.e., N_1 = K_1 mcm(n_o, k_m) and N_2 = K_2 mcm(n_m, k_i), such that N_1/N_2 = R_c^m. This generalization, though, leads to more complicated expressions, and is not considered in the following.
`
Fig. 4. Double serially concatenated (n, k, N1, N2) convolutional code.
`
DSCBC labeled with an IOWEF that can be evaluated as explained in the previous subsection. From the hyper-trellis, the upper bound to the bit-error probability can be obtained through the standard transfer function technique employed for convolutional codes [10]. As proved in [9] for the case of two parallel concatenated convolutional codes, when the length of the interleaver is significantly greater than the constraint length of the CC's, an accurate approximation of the exact upper bound consists of retaining only the branch of the hyper-trellis joining the all-zero hyper-states. In the following, we will always use this approximation.
`
III. DESIGN OF DOUBLE SERIALLY CONCATENATED CODES

In the previous section, we have presented an analytical bounding technique to find the ML performance of DSCBC's and DSCCC's. For practical applications, DSCCC's are to be preferred to DSCBC's. One reason is that trellis-based maximum a posteriori algorithms like the BCJR algorithm [11], [12] are less complex for convolutional than for block codes, since the trellis is time invariant for convolutional codes and time varying for block codes; the second is that the interleaver gain can be greater for convolutional CC's, provided they are suitably designed [9]. Hence, we deal mainly with the design of DSCCC's, extending our conclusions to DSCBC's when appropriate.
Consider the DSCCC depicted in Fig. 4. Its performance can be approximated by that of an equivalent block code whose IOWEF labels the branch of the hyper-trellis joining the zero states of the outer and inner codes. Denoting by $A(w, H)$ the CWEF of this equivalent block code, we can rewrite the upper
`

`

bound (3) as²

$$P_b(e) \le \sum_{h=h_m}^{n} \sum_{w=w_m}^{K} \frac{w}{K}\, A_{w,h}\, e^{-h R_c E_b/N_0} \qquad (10)$$

where $w_m$ is the minimum weight of an input sequence generating an error event of the outer code, and $h_m$ is the minimum weight³ of the codewords of the DSCCC. By error event of a convolutional code, we mean a sequence diverging from the zero state at time zero and merging into the zero state at some later discrete time. For constituent block codes, an error event is simply a codeword.
The coefficients $A_{w,h}$ of the equivalent block code can be obtained from (4), once the analogous quantities of the CC's are known. To evaluate them, consider a rate-$p/n$ convolutional code $C$ with memory $\nu$ and its equivalent block code, whose codewords are all the sequences of length $N$ bits of the convolutional code starting from and ending at the zero state. By definition, the codewords of the equivalent block code are concatenations of error events of the convolutional code. Let

$$A^{C}(W,H,J) = \sum_{w,h,j} A_{w,h,j}\, W^w H^h J^j \qquad (11)$$

be the weight enumerating function of the sequences of the convolutional code that concatenate $j$ error events with total input weight $w$ (see Fig. 5), where $A_{w,h,j}$ is the number of sequences of weight $h$, input weight $w$, and number of concatenated error events $j$. For $N$ much larger than the memory of the convolutional code, the coefficient $A_{w,h}$ of the equivalent block code can be approximated by⁴

$$A_{w,h} \simeq \sum_{j=1}^{j_M(w,h)} \binom{N/p}{j} A_{w,h,j} \qquad (12)$$

where $j_M(w,h)$, the largest number of error events concatenated in a codeword of weight $h$ and generated by a weight-$w$ input sequence, is a function of $h$ and $w$ that depends on the encoder, as we will see later.
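The enumeration behind (11) and (12) can be carried out explicitly for a small encoder. The sketch below is our own example, using the four-state, rate-1/2 nonrecursive encoder with octal generators (7, 5), which is not a code from the paper: it collects single error events by input and output weight, and the coefficients $A_{w,h,j}$ for $j$ concatenated events then follow by convolving these single-event counts.

```python
from collections import Counter

def error_events(max_len):
    """Enumerate error events (paths that leave state 0 and first re-merge
    into state 0) of the rate-1/2, four-state encoder with generators
    1 + D + D^2 and 1 + D^2 (octal 7, 5), up to trellis length max_len.
    Returns {(input_weight, output_weight): count}."""
    events = Counter()
    # Frontier entries: (state (s1, s2), input weight, output weight).
    # The first input must be 1, producing outputs (1, 1) and state (1, 0).
    frontier = [((1, 0), 1, 2)]
    for _ in range(max_len - 1):
        nxt = []
        for (s1, s2), w, h in frontier:
            for u in (0, 1):
                o1 = u ^ s1 ^ s2          # generator 1 + D + D^2
                o2 = u ^ s2               # generator 1 + D^2
                state, w2, h2 = (u, s1), w + u, h + o1 + o2
                if state == (0, 0):
                    events[(w2, h2)] += 1  # re-merged: record the event
                else:
                    nxt.append((state, w2, h2))
        frontier = nxt
    return dict(events)

ev = error_events(10)
# The single weight-1 input event has output weight 5, the free distance.
```

For this encoder, the unique weight-1 input event reaches the free distance 5, which is the kind of single-event data the enumerating function (11) organizes.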
Let us return now to the block code equivalent to the DSCCC. Using the previous result (12) for the inner code, for the middle code, and, analogously, for the outer code,⁵ and noting that the numbers of trellis steps of the three CC's are all fixed by the interleaver lengths,
`
2 In the following, a subscript "m" will denote "minimum," and a subscript "M" will denote "maximum." Note that the superscript "m" is also used to denote the middle code.
3 Since the input sequences of the inner code are not unconstrained i.i.d. binary sequences but, instead, codewords of the outer code, h_m can be greater than the inner code free distance d_f^i.
4 This assumption permits neglecting the length of the error events compared to N, and assuming that the number of ways j input sequences producing j error events can be arranged in a register of length N is $\binom{N/p}{j}$. The ratio N/p derives from the fact that the code has rate p/n, and thus N bits correspond to N/p input words or, equivalently, trellis steps.
5 In the following, superscripts "o," "m," and "i" will refer to quantities pertaining to the outer, middle, and inner code, respectively.
`
Fig. 5. Code sequence with parameters l, h, j.
`
we obtain for the outer code

$$A^{o}_{w,l} \simeq \sum_{j^o=1}^{j^o_M(w,l)} \binom{N_1/n^o}{j^o} A^{o}_{w,l,j^o}. \qquad (13)$$
`
For the middle and inner codes, similar expressions can be obtained. Then, substituting them into (4), we obtain the coefficient $A_{w,h}$ of the double serially concatenated block code equivalent to the DSCCC in the form

$$A_{w,h} \simeq \sum_{l=d_f^o}^{N_1} \sum_{m=d_m}^{N_2} \sum_{j^o=1}^{j^o_M(w,l)} \sum_{j^m=1}^{j^m_M(l,m)} \sum_{j^i=1}^{j^i_M(m,h)} \binom{N_1/n^o}{j^o} \binom{N_2/n^m}{j^m} \binom{N_2/k^i}{j^i} \frac{A^{o}_{w,l,j^o}\, A^{m}_{l,m,j^m}\, A^{i}_{m,h,j^i}}{\binom{N_1}{l}\binom{N_2}{m}} \qquad (14)$$

where $d_f^o$ is the free distance of the outer code, and $d_m$ is the minimum weight of the sequences of the middle code due to sequences of the outer code with weight $d_f^o$. By free distance $d_f$, we mean the minimum Hamming weight of error events for convolutional CC's, and the minimum Hamming weight of codewords for block CC's.
We are interested in large interleaver lengths, and thus use for the binomial coefficient the asymptotic approximation

$$\binom{N}{j} \simeq \frac{N^j}{j!}.$$
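This approximation is easily checked numerically; the quick sketch below (ours) shows that the ratio between $N^j/j!$ and $\binom{N}{j}$ tends to one as $N$ grows with $j$ fixed.

```python
from math import comb, factorial

def approx_ratio(N, j):
    """Ratio of the asymptotic estimate N^j / j! to the exact C(N, j)."""
    return (N ** j / factorial(j)) / comb(N, j)

for N in (100, 1000, 10000):
    print(N, approx_ratio(N, 3))
```

The ratio exceeds one and shrinks toward one, so the approximation slightly overestimates the count, harmlessly for an upper bound.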
Substitution of this approximation into (14) yields

$$A_{w,h} \simeq \sum_{l=d_f^o}^{N_1} \sum_{m=d_m}^{N_2} \sum_{j^o,\,j^m,\,j^i} \frac{l!\, m!}{j^o!\, j^m!\, j^i!} \cdot \frac{N_1^{\,j^o - l}\, N_2^{\,j^m + j^i - m}}{(n^o)^{j^o} (n^m)^{j^m} (k^i)^{j^i}}\, A^{o}_{w,l,j^o}\, A^{m}_{l,m,j^m}\, A^{i}_{m,h,j^i}. \qquad (15)$$

Finally, substituting (15) into (10) yields the bit-error probability bound in the form

$$P_b(e) \le \sum_{h=h_m}^{n} B_h\, e^{-h R_c E_b/N_0} \qquad (16)$$

where the coefficients $B_h$ have been defined as

$$B_h \triangleq \sum_{w=w_m}^{K} \frac{w}{K} \sum_{l=d_f^o}^{N_1} \sum_{m=d_m}^{N_2} \sum_{j^o,\,j^m,\,j^i} \frac{l!\, m!}{j^o!\, j^m!\, j^i!} \cdot \frac{N_1^{\,j^o - l}\, N_2^{\,j^m + j^i - m}}{(n^o)^{j^o} (n^m)^{j^m} (k^i)^{j^i}}\, A^{o}_{w,l,j^o}\, A^{m}_{l,m,j^m}\, A^{i}_{m,h,j^i} \qquad (17)$$

having used the result (15) for the coefficients $A_{w,h}$.
`
`

`

Using expressions (16) and (17) as the starting point, we will obtain some important design considerations.
The bound (16) to the bit-error probability is obtained by adding the terms of the summation with respect to the DSCCC weights $h$. From (17), the coefficients $B_h$ of the exponentials in $E_b/N_0$ depend, among other parameters, on the interleaver lengths $N_1$ and $N_2$, both proportional to a common length $N$. For large $N$ and for a given $h$, the dominant coefficient of the exponentials in $E_b/N_0$ is the one for which the exponent of $N$ is maximum. Define this maximum exponent as

$$\alpha(h) \triangleq \max_{w,l,m,j^o,j^m,j^i} \{\, j^o + j^m + j^i - l - m - 1 \,\}. \qquad (18)$$

Evaluating $\alpha(h)$ in general is not possible without specifying the CC's. Thus, we will consider two important cases, for which general expressions can be found.
`
A. The Exponent of $N$ for the Minimum Weight
For large values of $E_b/N_0$, the performance of the DSCC is dominated by the first term of the summation in (16), corresponding to the minimum value $h = h_m$. Remembering that, by definition, $j^i_M$, $j^m_M$, and $j^o_M$ are the maximum numbers of concatenated error events in codewords of the inner, middle, and outer codes of weights $h$, $m$, and $l$, respectively, the following inequalities hold true:

(19)

(20)

(21)

so that

(22)

In most cases,⁶ these inequalities are satisfied with equality, so that (22) becomes

(23)

The result (23) shows that the exponent of $N$ corresponding to the minimum weight of DSCCC codewords is always negative, thus yielding an interleaver gain at high $E_b/N_0$. Substitution of the exponent $\alpha(h_m)$ into (16), truncated to the first term of the summation in $h$, yields

(24)

where the constant is as in (25), shown at the bottom of the page, and involves the set of input weights that generate codewords of the outer code of weight $d_f^o$.
Expression (24) suggests the following conclusions.
• For the values of $E_b/N_0$ and $N$ where the DSCCC performance is dominated by its free distance, increasing the interleaver length yields a gain in performance.
• To increase the interleaver gain, one should choose an outer code and a middle code with large $d_f^o$ and $d_m$, respectively.
• To improve the performance with $E_b/N_0$, one should choose an inner, middle, and outer code combination such that $h_m$ is large.
These conclusions do not depend on the structure of the CC's, and thus they hold for both recursive and nonrecursive encoders.
The curves of Fig. 3, showing the performance of the various DSCBC's of Example 2 with increasing interleaver length, however, also show a different phenomenon: for a given interleaver length, there seems to be a minimum value of $E_b/N_0$ below which the bound diverges. In other words, there seem to be coefficients of the exponentials in $E_b/N_0$, for $h > h_m$, that increase with $N$.
To investigate this phenomenon, we will evaluate the largest exponent of $N$.
`
B. The Maximum Exponent of $N$
For any $h$, the following inequality holds:

(26)

where the two minimum weights appearing in (26) are, respectively, the minimum weight of codewords of the middle code yielding a codeword of weight $h$ of the inner code and the minimum weight of codewords of the outer code yielding a codeword of weight $h$ of the inner code, and $\lfloor x \rfloor$ means "integer part of $x$," so that

(27)

and

(28)

(25)

6 This will be seen in the examples that follow, and corresponds to the most favorable situation.
`
`

`

and the following upper bound to the maximum exponent in (26) holds:

(29)

Finally, since the inequality must hold for any $h$, we can write

(30)

Starting from (30), we will evaluate the upper bound to the maximum exponent of $N$ for all possible configurations.
1) Block Encoders and Nonrecursive Convolutional Inner and Middle Encoders: For nonrecursive inner and middle encoders, every input sequence with weight one generates a finite-weight error event, so that an input sequence with weight $w$ will generate, at most, $w$ error events, corresponding to the concatenation of $w$ error events of input weight 1. Since the uniform interleavers generate all possible permutations of their input sequences, this event will certainly occur. Substituting these values into (30), we obtain

(31)

and an interleaving gain is not possible. This conclusion holds true both for DSCCC's employing nonrecursive inner and middle encoders and for all DSCBC's, since block codes have codewords corresponding to input words with weight equal to 1.
For those DSCC's, we always have, for some $h$, coefficients of the exponentials in $E_b/N_0$ of (16) that increase with $N$, and this explains the divergence of the bound arising, for each $N$, when the coefficients increasing with $N$ become dominant.
2) Nonrecursive Inner, Recursive Middle Encoders: Since the inner encoder is nonrecursive, (30) becomes

(34)

For any $l$, owing to the recursiveness of the middle code, a further inequality holds, so that (34) can be evaluated separately according to the parity of $l$. For $l$ even, the weight associated to the highest exponent of $N$ is given by

(32)

the weight of an inner code sequence formed by error events each generated by a weight-1 input sequence, fed by a middle code sequence that concatenates error events generated by weight-2 input sequences. For $l$ odd, the value is given by

(33)

where the minimum weight of the sequences of the middle code generated by a weight-3 input sequence appears; in this case, in fact, the middle code sequence is generated by concatenated error events, of which all but one are generated by weight-2 input sequences and one is generated by a weight-3 input sequence. Note that the interleaving gain in this case is similar to the one obtainable with SCCC's in [7]; in the case of DSCCC's, however, it can be made larger.
3) Recursive Inner Encoder: When the inner encoder is recursive, we obtain a value for the maximum exponent that holds for both recursive and nonrecursive middle encoders. Since a recursive inner encoder cannot generate a finite-weight error event from a weight-1 input sequence, we start by replacing the corresponding bound into (30). For any $m$, a further inequality holds for the middle code and, taking it into account, we can write

(35)
`
`

`

Fig. 6. Comparison between analytical upper bounds to the bit-error probability for the DSCBC of Example 2 and for an SCBC obtained by concatenating a (7,3) code with the (15,7) BCH code. The parameter q represents, for both DSCBC and SCBC, the code latency expressed in terms of the number of input words.
`
It is interesting to note that, in a particular case, (35) simplifies to the same result obtained for the serial concatenation of two codes (SCCC's) in [7]. The weight associated to the highest exponent of $N$ satisfies a corresponding pair of inequalities, one for $l$ even and one for $l$ odd.
The following design considerations can be drawn from (31), (32), and (35).
• The choice of a nonrecursive encoder for both the middle and inner CC's should be avoided, as it leads [see (31)] to at least one coefficient of the exponentials in $E_b/N_0$ that increases with $N$, thus preventing the possibility of obtaining an interleaver gain for large $N$.
• Since at least one of the middle and inner encoders must be recursive, we can have three different cases, which all guarantee a certain interleaver gain. The worst is the one in which the middle encoder is recursive and the inner is nonrecursive; in this case, in fact, the value of the maximum exponent given by (32) is the highest. The best choice is a recursive inner encoder, regardless of the middle encoder; in this case, in fact, the maximum exponent is given by (35), which yields the lowest exponent of $N$, and thus the largest interleaver gain.
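The distinction driving these rules can be verified directly: for a nonrecursive encoder, a weight-1 input produces a finite-weight output, while for a recursive encoder it produces an output whose weight grows with the block length. The sketch below uses our own example encoders, not those of Table I: the four-state, rate-1/2 feedforward encoder (1 + D + D^2, 1 + D^2) versus its recursive systematic counterpart (1, (1 + D^2)/(1 + D + D^2)).

```python
def nonrecursive(bits):
    """Feedforward encoder with generators 1 + D + D^2 and 1 + D^2."""
    s1 = s2 = 0
    out = []
    for u in bits:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def recursive_systematic(bits):
    """Recursive systematic encoder (1, (1 + D^2)/(1 + D + D^2))."""
    s1 = s2 = 0
    out = []
    for u in bits:
        a = u ^ s1 ^ s2            # feedback 1 + D + D^2
        out += [u, a ^ s2]         # parity via feedforward 1 + D^2
        s1, s2 = a, s1
    return out

impulse = [1] + [0] * 99
print(sum(nonrecursive(impulse)))          # finite weight (5)
print(sum(recursive_systematic(impulse)))  # keeps growing with block length
```

This is exactly why a recursive inner encoder breaks the finite-weight mapping that (31) exploits, enabling the interleaver gain.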
`
IV. COMPARISON BETWEEN SIMPLE AND DOUBLE SERIALLY CONCATENATED CODES

To confirm the design rules obtained asymptotically, i.e., for large signal-to-noise ratios and large interleaver lengths, we analyze block and convolutional DSCC's with different interleaver lengths, and compare their performance with that predicted by the design guidelines. Moreover, we compare the analytical upper bounds to the bit-error probability of block and convolutional SCC's and DSCC's having the same code rate.
`
A. Serially Concatenated Block Codes
Consider first the DSCBC of Example 2. The predicted exponent is given by (23). In our case, the minimum distance of the outer code is 2 and that of the middle code is 3. Looking at the upper bounds to the bit-error probability shown in Fig. 3, it is easily verified that the interleaver gain, for a fixed and sufficiently large signal-to-noise ratio, behaves as predicted.
To compare simple and double serial block code concatenations, we have constructed two rate-3/15 codes. The first is the DSCBC of Example 2, and the second is an SCBC obtained by concatenating the (7,3) code, whose eight codewords are the even-weight codewords of the (7,4) Hamming code, with the (15,7) BCH code. The interleavers for the two concatenated codes have been chosen so as to yield the same latency, expressed with the parameter $q$ in terms of the number of input words. The curves of the bit-error probability bounds reported in Fig. 6 show the superior performance of the DSCBC for low-to-medium signal-to-noise ratios. For the smallest $q$, the performance is the same, whereas for larger values of $q$, the DSCBC behaves better, owing to the larger interleaving gain (a gain of 2 dB in the reported curves). For sufficiently large signal-to-noise ratios, the curves corresponding to the same value of $q$ merge, owing to the fact that the two codes have the same minimum distance.
B. Serially Concatenated Convolutional Codes
We consider several rate-1/4 DSCCC's formed by an outer four-state convolutional code with rate 1/2, a middle four-state convolutional code with rate 2/3, and an inner four-state
`
`

`

Fig. 7. Coefficients B(h) versus h for the three DSCCC's of Table II.

TABLE I
CONSTITUENT CONVOLUTIONAL ENCODERS USED FOR CONSTRUCTING DOUBLE SERIALLY CONCATENATED CODES

TABLE II
THREE DOUBLE SERIAL RATE-1/4 CONCATENATED CONVOLUTIONAL CODES; NUMBERS IN PARENTHESES ARE VALUES OF THE PARAMETERS OBTAINED USING THE BOUNDS OF SECTION III
`
convolutional code with rate 3/4, joined by two uniform interleavers of lengths $N_1$ and $N_2$.
The main parameters of the employed CC's are described in
Table I. In building the DSCCC's, we keep a nonrecursive
encoder as the outer encoder, whereas for the middle and inner
encoders we use three different combinations. For the outer
encoder there are no design rules: it can be either recursive
or nonrecursive, and our choice merely restricts the number of
cases to be simulated. The middle and inner encoders are both
nonrecursive for the first code, DSCCC1; for the second code,
DSCCC2, the middle encoder is recursive and the inner is
nonrecursive; finally, for the third code, DSCCC3, both the
middle and inner encoders are recursive. The main design
parameters of the three DSCCC's are reported in Table II.
To check the accuracy of the bounds, we have evaluated the
coefficients B(h) defined in (17) for two large values of the
interleaver length. Then, we have computed the coefficients
defined through (36). If the asymptotic (large interleaver
length) analysis of Section IV is true, then these
coefficients, based on their definition (36) and on the
definition (17) of B(h), should provide a good estimate of the
exponent defined in (18). To check this, we have reported the
coefficients versus h in Fig. 7 for the three codes DSCCC1,
DSCCC2, and DSCCC3.
Consider first the code DSCCC1 (continuous curve). The values
of the coefficients keep on increasing with h, according to the
behavior predicted by (31); on the other hand, the value
reached for the smallest h is in agreement with the result
(23). Passing to code DSCCC2 (dashed curve), we find the same
value for the smallest h, whereas the largest values are
negative, yielding an interleaver gain, in perfect agreement
with the prediction of (32). Finally, the dotted curve of code
DSCCC3 is also in agreement with the result stated in (35).
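The displayed form of (36) has not survived reproduction here; a natural reading, consistent with the surrounding text, is that the exponent is estimated from the ratio of the coefficients B(h) evaluated at two large interleaver lengths. A hypothetical sketch of such an estimator (the power-law form B(h) ~ C * N**alpha(h), the lengths, and the function name are assumptions for illustration, not taken from the paper):

```python
import math

def exponent_estimate(B_N1, B_N2, N1, N2):
    """Estimate the interleaver-gain exponent alpha(h) from the error
    coefficients B(h) at two interleaver lengths N1 and N2, assuming
    B(h) scales as C * N**alpha(h) for large N (hypothesized form)."""
    return math.log(B_N1 / B_N2) / math.log(N1 / N2)

# Synthetic check: if B(h) really scales as N**(-2), the estimator
# recovers alpha = -2 from any pair of large lengths.
N1, N2 = 1000, 4000
alpha = -2.0
B1, B2 = 5.0 * N1**alpha, 5.0 * N2**alpha
print(round(exponent_estimate(B1, B2, N1, N2), 6))  # -2.0
```

A negative estimated exponent signals an interleaver gain: the bit-error probability decreases as the interleaver length grows, which is the behavior the dashed and dotted curves of Fig. 7 are meant to exhibit.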
To compare simple and double serial concatenation of
convolutional codes, we have constructed two rate-1/4 DSCCC
A functional diagram of the iterative decoding algorithm for
DSCCC's is presented in Fig. 9, where we also show a double
turbo encoder, using three CC's and two interleavers, and its
iterative decoder, to highlight both the analogies and the
differences. Let us explain how the algorithm works, according
to the blocks of Fig. 9. The blocks labeled "A

This document is available on Docket Alarm but you must sign up to view it.


Or .

Accessing this document will incur an additional charge of $.

After purchase, you can access this document again without charge.

Accept $ Charge
throbber

Still Working On It

This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.

Give it another minute or two to complete, and then try the refresh button.

throbber

A few More Minutes ... Still Working

It can take up to 5 minutes for us to download a document if the court servers are running slowly.

Thank you for your continued patience.

This document could not be displayed.

We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.

You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.

Set your membership status to view this document.

With a Docket Alarm membership, you'll get a whole lot more, including:

  • Up-to-date information for this case.
  • Email alerts whenever there is an update.
  • Full text search for other cases.
  • Get email alerts whenever a new case matches your search.

Become a Member

One Moment Please

The filing “” is large (MB) and is being downloaded.

Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!

If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document

We are unable to display this document, it may be under a court ordered seal.

If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.


Access Government Site

We are redirecting you
to a mobile optimized page.





Document Unreadable or Corrupt

Refresh this Document
Go to the Docket

We are unable to display this document.

Refresh this Document
Go to the Docket