RECURSIVE SYSTEMATIC CONVOLUTIONAL CODES AND APPLICATION TO PARALLEL CONCATENATION

Punya THITIMAJSHIMA

Telecommunications Department, Faculty of Engineering
King Mongkut's Institute of Technology Ladkrabang, Bangkok 10520, Thailand
`
Abstract — This paper deals with a new class of convolutional codes called Recursive Systematic Convolutional (RSC) codes. These codes are very attractive when compared with the best known convolutional codes at low signal-to-noise ratio (SNR) and high code rate. A comparison with the best known convolutional codes is performed in terms of weight spectrum and bit error rate (BER) on a memoryless Gaussian channel. A new design of parallel concatenation is proposed, well matched to RSC codes and much more attractive than serial concatenation, for it needs only one clock timing recovery for the decoding.
`
d_k = \sum_{i=0}^{K-1} \gamma_i a_{k-i} \mod 2    (3)
`
Taking into account X_k = d_k or Y_k = d_k, the RSC encoder codeword C_k = (X_k, Y_k) has exactly the same expression as the NSC encoder outputs if g_0^1 = g_0^2 = 1 and d_k is replaced by a_k in relations (1.1)-(1.2). In Fig. 1, two RSC encoders with memory M = 2 are depicted, obtained from an NSC encoder defined by generators G1 = 5, G2 = 7.
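As an illustration of the recursion above, here is a minimal Python sketch of a Fig. 1-b style encoder (X_k = d_k, feedback taps taken from G1, forward taps from G2); the function name and bit-mask convention are ours, not the paper's:

```python
# Sketch of an RSC encoder (Fig. 1-b style): X_k = d_k is emitted as-is,
# the register is fed with a_k from equation (2), and Y_k is formed from
# the a_k values as in relation (1.2). Illustrative only.

def rsc_encode(bits, g1=0o5, g2=0o7, K=3):
    taps1 = [(g1 >> (K - 1 - i)) & 1 for i in range(K)]  # feedback taps gamma_i
    taps2 = [(g2 >> (K - 1 - i)) & 1 for i in range(K)]  # forward taps
    past = [0] * (K - 1)  # a_{k-1}, a_{k-2}, ...
    out = []
    for d in bits:
        # equation (2): a_k = d_k + sum_{i=1..K-1} gamma_i a_{k-i} (mod 2)
        a_k = (d + sum(taps1[i] * past[i - 1] for i in range(1, K))) % 2
        # Y_k = sum_{i=0..K-1} g2_i a_{k-i} (mod 2)
        y_k = (taps2[0] * a_k
               + sum(taps2[i] * past[i - 1] for i in range(1, K))) % 2
        out.append((d, y_k))      # codeword C_k = (X_k, Y_k) with X_k = d_k
        past = [a_k] + past[:-1]  # shift a_k into the register
    return out
```

The systematic output reproduces the data stream exactly, which is what later makes the parallel concatenation of Section III possible with a single systematic stream.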
`
I. INTRODUCTION
`
Let us consider a binary rate R = 1/2 convolutional encoder with overall constraint length K and memory M = K - 1. The input to the encoder at time k is a binary data bit d_k and the corresponding codeword C_k is the binary couple (X_k, Y_k) with:

X_k = \sum_{i=0}^{K-1} g_i^1 d_{k-i} \mod 2,    g_i^1 = 0, 1    (1.1)

Y_k = \sum_{i=0}^{K-1} g_i^2 d_{k-i} \mod 2,    g_i^2 = 0, 1    (1.2)

where G1 = {g_i^1}, G2 = {g_i^2} are the two encoder generators, generally expressed in octal.
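A compact way to see relations (1.1)-(1.2) in action is the following sketch (the bit-mask convention and function name are our own, not the paper's):

```python
# Sketch of a rate-1/2 NSC encoder implementing relations (1.1)-(1.2).
# Generators are given in octal, e.g. G1 = 5 (binary 101), G2 = 7 (111).

def nsc_encode(bits, g1=0o5, g2=0o7, K=3):
    state = 0  # the last K-1 data bits d_{k-1} ... d_{k-K+1}
    out = []
    for d in bits:
        window = (d << (K - 1)) | state       # d_k, d_{k-1}, ..., d_{k-K+1}
        x = bin(window & g1).count("1") % 2   # X_k, relation (1.1)
        y = bin(window & g2).count("1") % 2   # Y_k, relation (1.2)
        out.append((x, y))
        state = window >> 1                   # shift d_k into the register
    return out
```

Feeding a single 1 followed by zeros returns the code's impulse response, whose total Hamming weight (here 5) is the free distance d_f of the G1 = 5, G2 = 7 code.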
`
It is well known that the BER of a Non-Systematic Convolutional (NSC) code is lower than that of a systematic code with the same memory M at large SNR [1]. At low SNR, it is in general the other way round. The new class of RSC codes proposed in this paper is always better than the best NSC code at any SNR for high code rates.
`
A binary rate R = 1/2 RSC code is obtained from an NSC code by using a feedback loop and setting one of the two outputs X_k or Y_k equal to the input data bit d_k. For an RSC code, the shift register (memory) input is no longer the data bit d_k but a new binary variable a_k. If X_k = d_k (respectively Y_k = d_k), the output Y_k (resp. X_k) is equal to equation (1.2) (resp. (1.1)) with d_k replaced by a_k, and variable a_k is recursively calculated as:

a_k = d_k + \sum_{i=1}^{K-1} \gamma_i a_{k-i} \mod 2    (2)

where \gamma_i is respectively equal to g_i^1 if X_k = d_k and to g_i^2 if Y_k = d_k. Equation (2) can be rewritten in the form of equation (3) above.
`
Fig. 1 NSC encoder and the two associated RSC encoders with M = 2 and generators G1 = 5, G2 = 7:
1-a Non-Systematic Convolutional (NSC) encoder;
1-b Recursive Systematic Convolutional (RSC) encoder with X_k = d_k;
1-c Recursive Systematic Convolutional (RSC) encoder with Y_k = d_k.
`
Generally, we assume the input data bit d_k takes the values 0 or 1 with the same probability. From equation (2), we can show that variable a_k exhibits the same statistical property. Thus the trellis structure is identical for the RSC code and the NSC code, and these two codes have the same free distance d_f. However, the two output sequences {X_k} and {Y_k} do not correspond to the same input data sequence {d_k} for RSC and NSC codes. This is the main difference between the two codes.

0-7803-2509-5/95 US$4.00 © 1995 IEEE
`
When a punctured code is considered, some output bits X_k or Y_k are deleted according to a chosen perforation pattern defined by a matrix P. For instance, starting from a rate R = 1/2 code, the matrix P of the rate 2/3 punctured code is:

P = | 1 1 |
    | 1 0 |

For the punctured RSC code, data bit d_k must be emitted at each time k. This is obviously realized if all the elements belonging to the first or second row of matrix P are equal to 1. When the best perforation pattern for a punctured NSC code is such that outputs X_k and Y_k are both punctured (at different times k), the previous RSC encoder design (Fig. 1) cannot be used with this perforation pattern. In order to use the same matrix P for both RSC and NSC codes, the RSC encoder is now defined by equation (3) and:
`
X_k = d_k    (4)

Y_k = \sum_{i=0}^{K-1} \lambda_i a_{k-i} \mod 2    (5)

where coefficients \gamma_i and \lambda_i are respectively equal to g_i^1 and g_i^2 when the element p_{1j} (1 \le j \le n) of matrix P equals 1, and to g_i^2 and g_i^1 when p_{1j} = 0. For example, let us consider the matrix P equal to:
`
P = | 1 0 1 |
    | 1 1 0 |
Then, according to equations (2), (4) and (5), the punctured RSC encoder design for a memory M = 2 and generators G1 = 5, G2 = 7 is the one in Fig. 2. When the encoder coefficients change (from g_i^1 to g_i^2), the trellis characteristics change too and the decoding rule is modified. The decoding complexity is not increased by this switching operation, for a punctured code already needs a synchronisation between the encoder and the decoder.
`
II — RSC CODES PERFORMANCE
`
Several authors have already searched for the best punctured convolutional codes by using the free distance criterion and, more recently, by determining the weight spectrum of the punctured code and then calculating an upper bound of the BER [2]. The punctured RSC codes considered in this paper are obtained from the best punctured NSC codes given by [3]. To compare the performances of these two kinds of codes, we shall use the weight spectrum and the BER criteria.
`
Fig. 2 Punctured RSC encoder with M = 2, generators G1 = 5, G2 = 7, and R = 3/4; the switch is put on (s = 1) or off (s = 0) according to the perforation pattern.

Before presenting the different results, let us recall the definitions of the weight spectrum and the expression of the upper bound of the BER. The weight spectrum of a code is made up of two sets of coefficients n(d) and W(d), obtained from two series expansions related to the code transfer function T(D, N) [4]:
`
T(D, N)\big|_{N=1} = \sum_{d=d_f}^{\infty} n(d) D^d    (6)

\frac{\partial T(D, N)}{\partial N}\Big|_{N=1} = \sum_{d=d_f}^{\infty} W(d) D^d    (7)
where d_f is the free distance of the code, n(d) is the number of paths at Hamming distance d from the "null" path, and W(d) is the total number of input bits equal to 1 in all paths at distance d from the "null" path. In general, it is not easy to calculate the transfer function of a punctured code; that is why the first coefficients n(d) and W(d) are directly obtained from the trellis by using an algorithm derived from [5]. From the coefficients W(d), a tight upper bound of the BER, P_e, can be calculated for large SNR [4]:
`
P_e \le \sum_{d=d_f}^{\infty} W(d) P(d)    (8)
`
For a memoryless Gaussian channel with binary modulation, P(d) is equal to:

P(d) = \frac{1}{2} \operatorname{erfc}\left(\sqrt{d R \frac{E_b}{N_0}}\right)    (9)
`
where E_b/N_0 is the energy per information bit to noise density ratio and R is the code rate. For low SNR, this upper bound is not accurate enough; therefore we used a simulation (Viterbi algorithm) in order to evaluate the exact BER at low SNR.
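Numerically, the bound (8)-(9) is a short sum; the sketch below uses made-up W(d) values purely to show the mechanics (they are illustrative placeholders, not Table 1 entries):

```python
# Sketch of the union bound (8) with P(d) from (9) on a memoryless Gaussian
# channel: P_e <= sum_d W(d) * 0.5 * erfc(sqrt(d * R * Eb/N0)).

import math

def union_bound_ber(W, d_f, rate, ebno_db):
    ebno = 10.0 ** (ebno_db / 10.0)  # Eb/N0, dB -> linear
    return sum(w * 0.5 * math.erfc(math.sqrt((d_f + i) * rate * ebno))
               for i, w in enumerate(W))

# Example with placeholder coefficients W(d_f), W(d_f + 1), W(d_f + 2).
ber = union_bound_ber([1, 4, 12], d_f=5, rate=0.5, ebno_db=4.0)
```

As the text notes, the bound is only tight at large SNR; at low SNR the erfc terms no longer decay fast enough for the truncated series to dominate.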
`
We present in Table 1 some examples of RSC codes having a good weight spectrum for rates R = 2/3, 3/4, 4/5, 5/6 and for memory M = 4, 5, 6. The coefficients n(d) are the same for RSC and NSC codes. We can notice that the {W_rsc(d)} of the RSC code increase more slowly than the {W_nsc(d)} of the NSC code at any rate R and whatever the memory M.
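The first spectrum coefficients can be obtained by a breadth-first walk of the trellis, counting paths that diverge from and remerge with the all-zero path. The sketch below is a simplified stand-in for the algorithm of [5] (names and truncation strategy are ours); for the unpunctured M = 2, G1 = 5, G2 = 7 NSC code it reproduces the known values n(5) = 1, W(5) = 1, n(6) = 2, W(6) = 4:

```python
# Sketch: enumerate first-event error paths on the trellis up to a distance
# cap, accumulating n(d) (path counts) and W(d) (total input weight).

from collections import defaultdict

def step_nsc(state, bit, g1=0o5, g2=0o7, K=3):
    """One trellis branch: returns (next_state, branch output weight)."""
    window = (bit << (K - 1)) | state
    w = bin(window & g1).count("1") % 2 + bin(window & g2).count("1") % 2
    return window >> 1, w

def weight_spectrum(step, d_max):
    spec = defaultdict(lambda: [0, 0])      # d -> [n(d), W(d)]
    s, w = step(0, 1)                       # diverging branch (input bit 1)
    frontier = {(s, w, 1): 1}               # (state, dist, input ones) -> count
    while frontier:
        nxt = defaultdict(int)
        for (state, dist, ones), cnt in frontier.items():
            for bit in (0, 1):
                s2, w2 = step(state, bit)
                d2, o2 = dist + w2, ones + bit
                if d2 > d_max:
                    continue                # truncate: distance cap exceeded
                if s2 == 0:                 # remerged with the all-zero path
                    spec[d2][0] += cnt
                    spec[d2][1] += cnt * o2
                else:
                    nxt[(s2, d2, o2)] += cnt
        frontier = nxt
    return {d: tuple(v) for d, v in sorted(spec.items())}
```

Termination relies on the encoder being non-catastrophic (every cycle avoiding the zero state has positive output weight), so every surviving path eventually exceeds the distance cap.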
`
For a given memory M, the greater the rate R, the better the RSC codes are compared to the NSC codes. When rate R is greater than or equal to 3/4, the RSC codes are always better than the NSC codes, except for R = 3/4 and M = 4, where W_rsc(d_f) is twice W_nsc(d_f). The RSC code behaviour compared to the NSC code is illustrated in Fig. 3, where the ratio W_rsc(d)/W_nsc(d) is plotted as a function of distance d. From Fig. 3, we can see that RSC codes are very attractive, especially for low SNR and large rate R.
`
`
`
`
Table I Weight spectra of punctured RSC and NSC codes: memory M, rate R and generators G1, G2. For each code the table lists n(d), W_rsc(d), W_nsc(d) and the ratio W_rsc(d)/W_nsc(d), for d = d_f, d_f+1, d_f+2, ... (the numeric entries are not reliably recoverable from this copy).
`
In Figs. 4, 5 and 6, we have plotted the BER for different values of rate R and memory M. When memory M is greater than or equal to 5, the RSC codes are better than the NSC codes at any SNR, but the best coding gain is nevertheless obtained for low SNR and large rate R. For instance, at P_e = 10^{-1} and M = 5, the coding gain is approximately 0.8 dB at R = 2/3, whereas at R = 5/6 it reaches 2 dB.
`
Punctured RSC codes are very attractive for the concatenation of two convolutional codes. Indeed, in a concatenation design the two codes must be very efficient at low SNR. At large SNR, the concatenation design with interleaving is in any case very efficient and the BER is lower than with a single encoder. Therefore, even if the first coefficients W_rsc(d) of the RSC code are greater than the first coefficients W_nsc(d) of the NSC code, punctured RSC codes make it possible for concatenation to achieve better performances at any SNR.
`
Fig. 3-a Memory M = 4, generators G1 = 23, G2 = 35.
Fig. 3-b Memory M = 5, generators G1 = 75, G2 = 53.
Fig. 3-c Memory M = 6, generators G1 = 133, G2 = 171.
Fig. 3 W_rsc(d)/W_nsc(d) of the punctured RSC and NSC codes from Table 1, plotted against distance d.

Fig. 4 P_e of punctured RSC and NSC codes R = 2/3, 3/4 and 5/6 with M = 4, generators G1 = 23, G2 = 35.

Fig. 5 P_e of punctured RSC and NSC codes R = 2/3, 3/4 and 5/6 with M = 5, generators G1 = 75, G2 = 53.

Fig. 6 P_e of punctured RSC and NSC codes R = 2/3, 3/4 and 5/6 with M = 6, generators G1 = 133, G2 = 171.

Fig. 7 Example of encoder with parallel concatenation of two punctured RSC codes.
`
III — PARALLEL CONCATENATION OF RSC CODES
`
In general, serial concatenation with interleaving is considered to improve the BER at large SNR. With systematic codes, a new concatenation design called parallel concatenation can be used. In Fig. 7, we have represented the encoder schema of the parallel concatenation design. The rate R = 1/2 encoder is obtained from two identical RSC encoders built from an NSC encoder as previously described.

For an input data bit sequence {d_k}, outputs X_k and Y_k are respectively equal to d_k, and to Y_k^1 (encoder C1 output) n1 times over and Y_k^2 (encoder C2 output) n2 times over, according to a particular rule. The encoder C1 rate R1 and encoder C2 rate R2 are given by equation (10) below.
`
The input decoder informations I_k and Q_k are respectively equal to:

I_k = (2 X_k - 1) + B_Ik    (11.1)

Q_k = (2 Y_k - 1) + B_Qk    (11.2)

where B_Ik and B_Qk are two independent noises with the same variance \sigma^2. The redundant information Q_k is demultiplexed and sent toward decoder DEC1 when Y_k = Y_k^1 (Q_k^1) and toward decoder DEC2 when Y_k = Y_k^2 (Q_k^2).
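The observation model (11.1)-(11.2) is antipodal signalling plus additive Gaussian noise; a minimal sketch (function name and fixed seed are ours, with sigma the common noise standard deviation):

```python
# Sketch of the channel of (11.1)-(11.2): map bits to +/-1 and add two
# independent Gaussian noises of the same variance sigma^2.

import random

def channel(codewords, sigma, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    obs = []
    for x, y in codewords:
        i_k = (2 * x - 1) + rng.gauss(0.0, sigma)  # I_k, equation (11.1)
        q_k = (2 * y - 1) + rng.gauss(0.0, sigma)  # Q_k, equation (11.2)
        obs.append((i_k, q_k))
    return obs
```

The demultiplexing of Q_k toward DEC1 or DEC2 then only depends on which encoder's parity bit was transmitted at time k.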
`
The decoder DEC1 yields soft decisions to decoder DEC2, using an optimal decoding rule according to a modified "optimal decoding of linear codes for minimizing symbol error rate" algorithm [6]. The second decoder DEC2 simply uses a Viterbi algorithm. In Fig. 9, we have presented one example of BER for parallel concatenation using two RSC encoders with the same memory M = 4, rates R1 = 4/7, R2 = 4/5, generators (G1, G2) equal to (23, 35) and (37, 25), and with perforation matrices respectively equal to:

P1 = | 1 1 1 1 |        P2 = | 1 1 1 1 |
     | 1 1 1 0 |             | 0 0 0 1 |

In Fig. 9, we have also plotted for comparison the BER of two simple NSC codes (M = 4, G1 = 23, G2 = 35) and (M = 6, G1 = 133, G2 = 171).
`
R1 = (n1 + n2) / (2 n1 + n2),    R2 = (n1 + n2) / (n1 + 2 n2)    (10)

For instance, with n1 = 1 and n2 = 1, 2, 3, the rate couples (R1, R2) are respectively equal to (2/3//2/3), (3/4//3/5) and (4/5//4/7) (the double slash meaning parallel concatenation). A delay L1 is inserted in the encoder in order to take the decoder DEC1 delay into account.
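Equation (10) and the (n1, n2) examples can be checked mechanically with exact rational arithmetic (the function name is ours):

```python
# Sketch of the component-code rates of equation (10):
# R1 = (n1 + n2) / (2*n1 + n2), R2 = (n1 + n2) / (n1 + 2*n2).

from fractions import Fraction

def component_rates(n1, n2):
    r1 = Fraction(n1 + n2, 2 * n1 + n2)  # C1: its parity sent n1 times per period
    r2 = Fraction(n1 + n2, n1 + 2 * n2)  # C2: its parity sent n2 times per period
    return r1, r2
```

With n1 = 1 and n2 = 1, 2, 3 this reproduces the rate couples (2/3//2/3), (3/4//3/5) and (4/5//4/7) quoted in the text.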
`
The decoder is made up of two elementary decoders according to a serial concatenation design (Fig. 8). The first decoder DEC1 is associated with the lower rate R1 encoder C1 and yields a soft decision. The error bursts at the decoder DEC1 output are scattered by the interleaving.
`
Fig. 8 Decoder for parallel concatenation of two punctured RSC codes.
With concatenation of two convolutional codes, the first decoder must yield soft decisions, and the decoding algorithm given by [6] is fairly complex. A simpler, real-time decoding algorithm has been worked out [8] to make the concatenation scheme implementable on silicon.
`
ACKNOWLEDGEMENT

The authors wish to thank Prof. A. Glavieux and Prof. C. Berrou of the École Nationale Supérieure des Télécommunications de Bretagne, France, for their effective support of this development.
Fig. 9 Compared P_e of parallel concatenation of RSC and NSC codes with the same rate R = 1/2.
`
It is well known that the concatenation design is attractive for large SNR. Fig. 9 shows that parallel concatenation leads to the same conclusion, but parallel concatenation is easier to implement than classic concatenation, which is its main advantage. Indeed, with parallel concatenation the two decoders use the same clock, thus the timing clock recovery is simpler than with classic concatenation. The decoder of Fig. 8 is not optimal. After a first decoding step, it is possible to improve the BER by performing a multi-step decoding [7].
`
IV — CONCLUSION

In this paper, we have presented a new class of convolutional codes derived from classical NSC codes. Using a computer search, we obtained the weight spectrum of these codes and we have shown that the RSC code is better than the NSC code at low SNR and at high rate. For large SNR, these two classes of codes are equivalent. Finally, we proposed a new concatenation design which is easier to implement than classic concatenation.
`
REFERENCES

[1] A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Coding. New York: McGraw-Hill, 1979.
[2] D. Haccoun and G. Begin, "High-rate punctured convolutional codes for Viterbi and sequential decoding," IEEE Trans. Commun., vol. COM-37, pp. 1113-1125, Nov. 1989.
[3] G. Begin, D. Haccoun and C. Paquin, "Further results on high-rate punctured convolutional codes for Viterbi and sequential decoding," IEEE Trans. Commun., vol. COM-38, pp. 1922-1928, Nov. 1990.
[4] A. J. Viterbi, "Convolutional codes and their performance in communication systems," IEEE Trans. Commun. Technol., vol. COM-19, pp. 751-772, Oct. 1971.
[5] M. Cedervall and R. Johannesson, "A fast algorithm for computing distance spectrum of convolutional codes," IEEE Trans. Inform. Theory, vol. IT-35, pp. 1146-1159, Nov. 1989.
[6] L. R. Bahl, J. Cocke, F. Jelinek and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, Mar. 1974.
[7] C. Berrou, A. Glavieux and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: turbo-codes," in Proc. ICC'93, Geneva, Switzerland, May 1993, pp. 1064-1070.
[8] C. Berrou, P. Adde, E. Angui and S. Faudeil, "A low complexity soft-output Viterbi decoder architecture," in Proc. ICC'93, Geneva, Switzerland, May 1993, pp. 737-740.