NEAR SHANNON LIMIT ERROR-CORRECTING CODING AND DECODING: TURBO-CODES (1)
`
`Claude Berrou, Alain Glavieux and Punya Thitimajshima
`
`Claude Berrou, Integrated Circuits for Telecommunication Laboratory
`
`Alain Glavieux and Punya Thitimajshima, Digital Communication Laboratory
`
Ecole Nationale Supérieure des Télécommunications de Bretagne, France
`
(1) Patents No 9105279 (France), No 92460011.7 (Europe), No 07/870,483 (USA)
`
`
Abstract - This paper deals with a new class of convolutional codes called Turbo-codes, whose performances in terms of Bit Error Rate (BER) are close to the SHANNON limit. The Turbo-Code encoder is built using a parallel concatenation of two Recursive Systematic Convolutional codes and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders.
`
`I - INTRODUCTION
`Consider a binary rate R=1/2 convolutional encoder with
`constraint length K and memory M=K-1. The input to the
`encoder at time k is a bit dk and the corresponding codeword
`Ck is the binary couple (Xk, Yk) with
$$X_k = \sum_{i=0}^{K-1} g_{1i}\, d_{k-i} \bmod 2, \quad g_{1i} = 0,1 \qquad (1a)$$

$$Y_k = \sum_{i=0}^{K-1} g_{2i}\, d_{k-i} \bmod 2, \quad g_{2i} = 0,1 \qquad (1b)$$
`
where G1 = {g1i}, G2 = {g2i} are the two encoder generators,
`generally expressed in octal form.
It is well known that the BER of a classical Non Systematic Convolutional (NSC) code is lower than that of a classical Systematic code with the same memory M at large SNR. At low SNR, it is in general the other way round. The new class of Recursive Systematic Convolutional (RSC) codes, proposed in this paper, can be better than the best NSC code at any SNR for high code rates.
A binary rate R=1/2 RSC code is obtained from an NSC code by using a feedback loop and setting one of the two outputs Xk or Yk equal to the input bit dk. For an RSC code, the shift register (memory) input is no longer the bit dk but is a new binary variable ak. If Xk=dk (respectively Yk=dk), the output Yk (resp. Xk) is equal to equation (1b) (resp. 1a) by substituting ak for dk, and the variable ak is recursively calculated as

$$a_k = d_k + \sum_{i=1}^{K-1} \gamma_i\, a_{k-i} \bmod 2 \qquad (2)$$

where γi is respectively equal to g1i if Xk=dk and to g2i if Yk=dk. Equation (2) can be rewritten as

$$d_k = \sum_{i=0}^{K-1} \gamma_i\, a_{k-i} \bmod 2. \qquad (3)$$

One RSC encoder with memory M=4 obtained from an NSC encoder defined by generators G1=37, G2=21 is depicted in Fig. 1.
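As an illustration, here is a minimal Python sketch of equations (1) and (2) for these generators; the register and bit-ordering conventions are assumptions made for this sketch, not taken from the paper.

```python
# Hedged sketch (not the authors' implementation) of the NSC and RSC encoders
# of section I, for K = 5 and generators G1 = 37, G2 = 21 (octal).
G1, G2, K = 0o37, 0o21, 5
MASK = (1 << (K - 1)) - 1                 # the K-1 memory bits

def parity(v):
    return bin(v).count("1") & 1

def nsc_encode(d):
    """Non Systematic code: Xk and Yk from equations (1a) and (1b)."""
    reg, out = 0, []
    for dk in d:
        s = (dk << (K - 1)) | reg         # bits d_k, d_{k-1}, ..., d_{k-K+1}
        out.append((parity(s & G1), parity(s & G2)))
        reg = (s >> 1) & MASK
    return out

def rsc_encode(d):
    """Recursive Systematic code: Xk = dk, ak from eq. (2), Yk from eq. (1b)."""
    reg, out = 0, []
    for dk in d:
        ak = dk ^ parity(reg & (G1 & MASK))   # eq. (2), feedback taps gamma_i = g1i
        s = (ak << (K - 1)) | reg
        out.append((dk, parity(s & G2)))      # systematic output plus parity
        reg = (s >> 1) & MASK
    return out

# Both encoders walk the same 16-state trellis; only the mapping from the
# input sequence to the transitions differs.
print(rsc_encode([1, 0, 1, 1, 0]))
```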
`
Generally, we assume that the input bit dk takes values 0 or 1 with the same probability. From equation (2), we can show that variable ak exhibits the same statistical property:

$$P_r\{a_k = 0\,/\,a_1 = \varepsilon_1, \ldots, a_{k-1} = \varepsilon_{k-1}\} = P_r\{d_k = \varepsilon\} = 1/2 \qquad (4)$$

with ε equal to

$$\varepsilon = \sum_{i=1}^{K-1} \gamma_i\, \varepsilon_{k-i} \bmod 2. \qquad (5)$$

Thus the trellis structure is identical for the RSC code and the NSC code and these two codes have the same free distance df. However, the two output sequences {Xk} and {Yk} do not correspond to the same input sequence {dk} for RSC and NSC codes. This is the main difference between the two codes.

When punctured code is considered, some output bits Xk or Yk are deleted according to a chosen puncturing pattern defined by a matrix P. For instance, starting from a rate R=1/2 code, the matrix P of the rate 2/3 punctured code is

$$P = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}.$$

Fig. 1b Recursive Systematic code.
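A hedged sketch of this puncturing follows; applying column (k mod 2) of P at time k, and keeping the systematic bits, are assumed conventions for this illustration.

```python
# Rate 1/2 -> rate 2/3 puncturing with the matrix P given above.
P = [[1, 1],   # keep pattern for the Xk stream
     [1, 0]]   # keep pattern for the Yk stream (every second Yk deleted)

def puncture(xy_pairs):
    """Keep (Xk, Yk) bits according to P: 3 coded bits per 2 info bits."""
    out = []
    for k, (xk, yk) in enumerate(xy_pairs):
        if P[0][k % 2]:
            out.append(xk)
        if P[1][k % 2]:
            out.append(yk)
    return out

print(puncture([(1, 1), (0, 0), (1, 0), (1, 1)]))   # 4 pairs -> 6 bits
```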
`
`
`
`
`II - PARALLEL CONCATENATION OF RSC CODES
With RSC codes, a new concatenation scheme, called parallel concatenation, can be used. In Fig. 2, an example of two identical RSC codes with parallel concatenation is shown. Both elementary encoder (C1 and C2) inputs use the same bit dk but according to a different sequence due to the presence of an interleaver. For an input bit sequence {dk}, encoder outputs Xk and Yk at time k are respectively equal to dk (systematic encoder) and to encoder C1 output Y1k, or to encoder C2 output Y2k. If the coded outputs (Y1k, Y2k) of encoders C1 and C2 are used respectively n1 times and n2 times and so on, the encoder C1 rate R1 and encoder C2 rate R2 are equal to
`
$$R_1 = \frac{n_1 + n_2}{2 n_1 + n_2}, \qquad R_2 = \frac{n_1 + n_2}{2 n_2 + n_1}. \qquad (6)$$
`
Fig. 2 Recursive Systematic codes with parallel concatenation.

Fig. 3a Principle of the decoder according to a serial concatenation scheme.
`
The decoder DEC depicted in Fig. 3a is made up of two elementary decoders (DEC1 and DEC2) in a serial concatenation scheme. The first elementary decoder DEC1 is associated with the lower rate R1 encoder C1 and yields a soft (weighted) decision. The error bursts at the decoder DEC1 output are scattered by the interleaver and the encoder delay L1 is inserted to take the decoder DEC1 delay into account. Parallel concatenation is a very attractive scheme because both elementary encoder and decoder use a single frequency clock.
For a discrete memoryless gaussian channel and a binary modulation, the decoder DEC input is made up of a couple Rk of two random variables xk and yk, at time k:

$$x_k = (2 d_k - 1) + i_k \qquad (7a)$$

$$y_k = (2 Y_k - 1) + q_k \qquad (7b)$$

where ik and qk are two independent noises with the same variance σ². The redundant information yk is demultiplexed and sent to decoder DEC1 when Yk = Y1k and toward decoder DEC2 when Yk = Y2k. When the redundant information of a given encoder (C1 or C2) is not emitted, the corresponding decoder input is set to zero. This is performed by the DEMUX/INSERTION block.

It is well known that soft decoding is better than hard decoding, therefore the first decoder DEC1 must deliver to the second decoder DEC2 a weighted (soft) decision. The Logarithm of Likelihood Ratio (LLR), Λ1(dk), associated with each decoded bit dk by the first decoder DEC1 is a relevant piece of information for the second decoder DEC2:

$$\Lambda_1(d_k) = \log \frac{P_r\{d_k = 1\,/\,\text{observation}\}}{P_r\{d_k = 0\,/\,\text{observation}\}} \qquad (8)$$

where Pr{dk = i / observation}, i = 0, 1, is the a posteriori probability (APP) of the data bit dk.
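A minimal sketch of the channel model (7a)-(7b) follows: antipodal mapping of the bits plus two independent gaussian noises of variance σ². The value of sigma and the block length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, N = 0.8, 8
d = rng.integers(0, 2, N)                      # information bits dk (Xk = dk)
Y = rng.integers(0, 2, N)                      # parity bits Yk (placeholder values)
x = (2 * d - 1) + sigma * rng.normal(size=N)   # eq. (7a): xk = (2dk - 1) + ik
y = (2 * Y - 1) + sigma * rng.normal(size=N)   # eq. (7b): yk = (2Yk - 1) + qk
# For this channel the LLR contribution of xk alone is 2*xk/sigma^2,
# the first term of relation (28) derived in section IV.
llr_x = 2 * x / sigma**2
```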
`
`1065
`
III - OPTIMAL DECODING OF RSC CODES WITH WEIGHTED DECISION

The VITERBI algorithm is an optimal decoding method which minimizes the probability of sequence error for convolutional codes. Unfortunately this algorithm is not able to yield the APP for each decoded bit. A relevant algorithm for this purpose has been proposed by BAHL et al. [1]. This algorithm minimizes the bit error probability in decoding linear block and convolutional codes and yields the APP for each decoded bit. For RSC codes, the BAHL et al. algorithm must be modified in order to take into account their recursive character.
`
`III - 1 Modified BAHL et al. algorithm for RSC codes
Consider an RSC code with constraint length K; at time k the encoder state Sk is represented by a K-uple

$$S_k = (a_k, a_{k-1}, \ldots, a_{k-K+1}). \qquad (9)$$

Also suppose that the information bit sequence {dk} is made up of N independent bits dk, taking values 0 and 1 with equal probability, and that the encoder initial state S0 and final state SN are both equal to zero, i.e.

$$S_0 = S_N = (0, 0, \ldots, 0) = 0. \qquad (10)$$

The encoder output codeword sequence, noted $C_1^N = \{C_1, \ldots, C_k, \ldots, C_N\}$, is the input to a discrete gaussian memoryless channel whose output is the sequence $R_1^N = \{R_1, \ldots, R_k, \ldots, R_N\}$, where Rk = (xk, yk) is defined by relations (7a) and (7b).
`
`
`
The APP of a decoded data bit dk can be derived from the joint probability $\lambda_k^i(m)$ defined by

$$\lambda_k^i(m) = P_r\{d_k = i, S_k = m\,/\,R_1^N\} \qquad (11)$$

and thus, the APP of a decoded data bit dk is equal to

$$P_r\{d_k = i\,/\,R_1^N\} = \sum_m \lambda_k^i(m), \quad i = 0, 1. \qquad (12)$$

From relations (8) and (12), the LLR Λ(dk) associated with a decoded bit dk can be written as

$$\Lambda(d_k) = \log \frac{\sum_m \lambda_k^1(m)}{\sum_m \lambda_k^0(m)}. \qquad (13)$$
`
Finally the decoder can make a decision by comparing Λ(dk) to a threshold equal to zero:

$$\hat{d}_k = 1 \ \text{if}\ \Lambda(d_k) > 0, \qquad \hat{d}_k = 0 \ \text{if}\ \Lambda(d_k) < 0. \qquad (14)$$
In order to compute the probability $\lambda_k^i(m)$, let us introduce the probability functions $\alpha_k^i(m)$, $\beta_k(m)$ and $\gamma_i(R_k, m', m)$:

$$\alpha_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^k\}}{P_r\{R_1^k\}} = P_r\{d_k = i, S_k = m\,/\,R_1^k\} \qquad (15)$$

$$\beta_k(m) = \frac{P_r\{R_{k+1}^N\,/\,S_k = m\}}{P_r\{R_{k+1}^N\,/\,R_1^k\}} \qquad (16)$$

$$\gamma_i(R_k, m', m) = P_r\{d_k = i, R_k, S_k = m\,/\,S_{k-1} = m'\}. \qquad (17)$$

The joint probability $\lambda_k^i(m)$ can be rewritten using BAYES rule:

$$\lambda_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^k, R_{k+1}^N\}}{P_r\{R_1^k, R_{k+1}^N\}}. \qquad (18)$$

Thus we obtain

$$\lambda_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^k\}}{P_r\{R_1^k\}} \cdot \frac{P_r\{R_{k+1}^N\,/\,d_k = i, S_k = m, R_1^k\}}{P_r\{R_{k+1}^N\,/\,R_1^k\}}. \qquad (19)$$
`
Taking into account that events after time k are not influenced by observation $R_1^k$ and bit dk if state Sk is known, the probability $\lambda_k^i(m)$ is equal to

$$\lambda_k^i(m) = \alpha_k^i(m)\, \beta_k(m). \qquad (20)$$
`
The probabilities $\alpha_k^i(m)$ and $\beta_k(m)$ can be recursively calculated from probability $\gamma_i(R_k, m', m)$. From annex I, we obtain

$$\alpha_k^i(m) = \frac{\sum_{m'} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m')} \qquad (21)$$

and

$$\beta_k(m) = \frac{\sum_{m'} \sum_{i=0}^{1} \gamma_i(R_{k+1}, m, m')\, \beta_{k+1}(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_{k+1}, m', m)\, \alpha_k^j(m')}. \qquad (22)$$
The probability $\gamma_i(R_k, m', m)$ can be determined from the transition probabilities of the discrete gaussian memoryless channel and the transition probabilities of the encoder trellis. From relation (17), $\gamma_i(R_k, m', m)$ is given by

$$\gamma_i(R_k, m', m) = p(R_k\,/\,d_k = i, S_k = m, S_{k-1} = m')\; q(d_k = i\,/\,S_k = m, S_{k-1} = m')\; \pi(S_k = m\,/\,S_{k-1} = m') \qquad (23)$$
where p(./.) is the transition probability of the discrete gaussian memoryless channel. Conditionally to (dk = i, Sk = m, S_{k-1} = m'), xk and yk are two uncorrelated gaussian variables and thus we obtain

$$p(R_k\,/\,d_k = i, S_k = m, S_{k-1} = m') = p(x_k\,/\,d_k = i, S_k = m, S_{k-1} = m')\; p(y_k\,/\,d_k = i, S_k = m, S_{k-1} = m'). \qquad (24)$$
Since the convolutional encoder is a deterministic machine, $q(d_k = i\,/\,S_k = m, S_{k-1} = m')$ is equal to 0 or 1. The transition state probabilities $\pi(S_k = m\,/\,S_{k-1} = m')$ of the trellis are defined by the encoder input statistic. Generally, $P_r\{d_k = 1\} = P_r\{d_k = 0\} = 1/2$ and, since there are two possible transitions from each state, $\pi(S_k = m\,/\,S_{k-1} = m') = 1/2$ for each of these transitions.
`
Different steps of the modified BAHL et al. algorithm:

- Step 0: Probabilities $\alpha_0^i(m)$ and $\beta_N(m)$ are initialized according to relation (10):

$$\alpha_0^i(0) = 1, \quad \alpha_0^i(m) = 0 \ \forall m \neq 0,\ i = 0, 1 \qquad (25a)$$

$$\beta_N(0) = 1, \quad \beta_N(m) = 0 \ \forall m \neq 0. \qquad (25b)$$

- Step 1: For each observation Rk, the probabilities $\alpha_k^i(m)$ and $\gamma_i(R_k, m', m)$ are computed using relations (21) and (23) respectively.

- Step 2: When the sequence $R_1^N$ has been completely received, probabilities $\beta_k(m)$ are computed using relation (22), and probabilities $\alpha_k^i(m)$ and $\beta_k(m)$ are multiplied in order to obtain $\lambda_k^i(m)$. Finally the LLR associated with each decoded bit dk is computed from relation (13).
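The steps above map directly onto a forward-backward implementation. The following Python sketch is one way to realize them for the G1=37, G2=21 RSC code; it is an illustration under stated assumptions (a terminated trellis, dropped constant factors in γ, a state indexing that merges the i superscript of $\alpha_k^i(m)$ into the transitions), not the authors' implementation.

```python
import numpy as np

G1, G2, K = 0o37, 0o21, 5
M = K - 1
NSTATES = 1 << M
FB = G1 & ((1 << M) - 1)            # feedback taps gamma_1..gamma_{K-1}

def parity(v):
    return bin(v).count("1") & 1

# Precompute the 16-state trellis: next state and parity bit for input i
next_state = np.zeros((NSTATES, 2), dtype=int)
parity_out = np.zeros((NSTATES, 2), dtype=int)
for mp in range(NSTATES):
    for i in (0, 1):
        a = i ^ parity(mp & FB)                  # eq. (2)
        s = (a << M) | mp
        parity_out[mp, i] = parity(s & G2)       # eq. (1b) applied to ak
        next_state[mp, i] = (s >> 1) & (NSTATES - 1)

def bcjr_llr(x, y, sigma2):
    """LLR (13)/(26) for each dk from observations (7a)-(7b); assumes the
    encoder starts and ends in state 0 (relation (10))."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    N = len(x)
    alpha = np.zeros((N + 1, NSTATES)); alpha[0, 0] = 1.0   # eq. (25a)
    beta = np.zeros((N + 1, NSTATES)); beta[N, 0] = 1.0     # eq. (25b)
    gam = np.zeros((N, NSTATES, 2))
    for k in range(N):
        for mp in range(NSTATES):
            for i in (0, 1):
                gx = np.exp(-(x[k] - (2 * i - 1)) ** 2 / (2 * sigma2))
                gy = np.exp(-(y[k] - (2 * parity_out[mp, i] - 1)) ** 2 / (2 * sigma2))
                gam[k, mp, i] = gx * gy          # eq. (23)-(24), constants dropped
        for mp in range(NSTATES):                # forward recursion, eq. (21)
            for i in (0, 1):
                alpha[k + 1, next_state[mp, i]] += gam[k, mp, i] * alpha[k, mp]
        alpha[k + 1] /= alpha[k + 1].sum()       # normalisation as in (21)
    for k in range(N - 1, -1, -1):               # backward recursion, eq. (22)
        for mp in range(NSTATES):
            for i in (0, 1):
                beta[k, mp] += gam[k, mp, i] * beta[k + 1, next_state[mp, i]]
        beta[k] /= beta[k].sum()
    llr = np.zeros(N)
    for k in range(N):                           # eq. (20) summed over states
        num = den = 1e-300
        for mp in range(NSTATES):
            num += gam[k, mp, 1] * alpha[k, mp] * beta[k + 1, next_state[mp, 1]]
            den += gam[k, mp, 0] * alpha[k, mp] * beta[k + 1, next_state[mp, 0]]
        llr[k] = float(np.log(num / den))
    return llr
```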
`
IV - THE EXTRINSIC INFORMATION OF THE RSC DECODER
`
In this chapter, we will show that the LLR Λ(dk) associated with each decoded bit dk is the sum of the LLR of dk at the decoder input and of another information, called extrinsic information, generated by the decoder. Using the LLR Λ(dk) definition (13) and relations (20) and (21), we obtain

$$\Lambda(d_k) = \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(R_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(R_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}. \qquad (26)$$
`
Since the encoder is systematic (Xk = dk), the transition probability $p(x_k\,/\,d_k = i, S_k = m, S_{k-1} = m')$ in expression $\gamma_i(R_k, m', m)$ is independent of state values Sk and S_{k-1}. Therefore we can factorize this transition probability in the numerator and in the denominator of relation (26):
`
`
`
$$\Lambda(d_k) = \log \frac{p(x_k\,/\,d_k = 1)}{p(x_k\,/\,d_k = 0)} + \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}. \qquad (27)$$

Fig. 3b Feedback decoder (under zero internal delay assumption).
Conditionally to dk = 1 (resp. dk = 0), variables xk are gaussian with mean 1 (resp. -1) and variance σ²; thus the LLR Λ(dk) is still equal to

$$\Lambda(d_k) = \frac{2}{\sigma^2}\, x_k + W_k \qquad (28)$$

where

$$W_k = \Lambda(d_k)\big|_{x_k = 0} = \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}. \qquad (29)$$
`Wk is a function of the redundant information introduced by
`the encoder. In general Wk has the same sign as dk; therefore
`Wk may improve the LLR associated with each decoded data
`bit dk. This quantity represents the extrinsic information
`supplied by the decoder and does not depend on decoder
`input xk. This property will be used for decoding the two
`parallel concatenated encoders.
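In practice, relations (28)-(29) mean the extrinsic information is what remains of the LLR once the direct channel contribution of xk has been removed. A sketch, with names tied to the bcjr_llr sketch given after section III (our assumptions, not the paper's):

```python
import numpy as np

def extrinsic_info(llr, x, sigma2):
    """Wk = Lambda(dk) - (2/sigma^2) xk, relations (28)-(29)."""
    return np.asarray(llr) - 2.0 * np.asarray(x) / sigma2
```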
`
V - DECODING SCHEME OF PARALLEL CONCATENATION CODES
`
In the decoding scheme represented in Fig. 3a, decoder DEC1 computes the LLR Λ1(dk) for each transmitted bit dk from sequences {xk} and {yk}, then the decoder DEC2 performs the decoding of sequence {dk} from sequences {Λ1(dk)} and {yk}. Decoder DEC1 uses the modified BAHL et al. algorithm and decoder DEC2 may use the VITERBI algorithm. The global decoding rule is not optimal because the first decoder uses only a fraction of the available redundant information. Therefore it is possible to improve the performance of this serial decoder by using a feedback loop.
`
`1067
`
`V-1 Decoding with a feedback loop
We consider now that both decoders DEC1 and DEC2 use the modified BAHL et al. algorithm. We have seen in section IV that the LLR at the decoder output can be expressed as a sum of two terms if the decoder inputs were independent. Hence if the decoder DEC2 inputs Λ1(dk) and y2k are independent, the LLR Λ2(dk) at the decoder DEC2 output can be written as

$$\Lambda_2(d_k) = f(\Lambda_1(d_k)) + W_{2k} \qquad (30)$$

with

$$\Lambda_1(d_k) = \frac{2}{\sigma^2}\, x_k + W_{1k}. \qquad (31)$$

From relation (29), we can see that the decoder DEC2 extrinsic information W2k is a function of the sequence {Λ1(dn)}n≠k. Since Λ1(dn) depends on observation $R_1^N$, extrinsic information W2k is correlated with observations xk and y1k. Nevertheless, from relation (29), the greater |n−k| is, the less correlated are Λ1(dn) and observations xk, yk. Thus, due to the presence of interleaving between decoders DEC1 and DEC2, extrinsic information W2k and observations xk, y1k are weakly correlated. Therefore extrinsic information W2k and observations xk, y1k can be jointly used for carrying out a new decoding of bit dk, the extrinsic information zk = W2k acting as a diversity effect in an iterative process.
In Fig. 3b, we have depicted a new decoding scheme using the extrinsic information W2k generated by decoder DEC2 in a feedback loop. This decoder does not take into account the different delays introduced by decoders DEC1 and DEC2, and a more realistic decoding structure will be presented later.
The first decoder DEC1 now has three data inputs, (xk, y1k, zk), and probabilities $\alpha_k^i(m)$ and $\beta_k(m)$ are computed by substituting Rk = (xk, y1k) with Rk = (xk, y1k, zk) in relations (21) and (22). Taking into account that zk is weakly correlated with xk and y1k, and supposing that zk can be approximated by a gaussian variable with variance $\sigma_z^2 \neq \sigma^2$, the transition probability of the discrete gaussian memoryless channel can now be factored into three terms:

$$p(R_k\,/\,d_k = i, S_k = m, S_{k-1} = m') = p(x_k/.)\; p(y_k/.)\; p(z_k/.). \qquad (32)$$
The encoder C1 with initial rate R1, through the feedback loop, is now equivalent to a rate R'1 encoder with

$$R'_1 = \frac{R_1}{1 + R_1}. \qquad (33)$$
The first decoder obtains additional redundant information with zk that may significantly improve its performances; the term Turbo-codes is given for this iterative decoder scheme with reference to the turbo engine principle. With the feedback decoder, the LLR Λ1(dk) generated by decoder DEC1 is now equal to

$$\Lambda_1(d_k) = \frac{2}{\sigma^2}\, x_k + \frac{2}{\sigma_z^2}\, z_k + W_{1k} \qquad (34)$$

where W1k depends on the sequence {zn}n≠k. As indicated above, information zk has been built by decoder DEC2 at the previous decoding step. Therefore zk must not be used as input information for decoder DEC2. Thus the decoder DEC2 input sequences at step p (p ≥ 2) will be the sequences {Λ̃1(dk)} and {yk} with

$$\tilde{\Lambda}_1(d_k) = \Lambda_1(d_k)\big|_{z_k = 0}. \qquad (35)$$
Finally, from relation (30), decoder DEC2 extrinsic information zk = W2k, after deinterleaving, can be written as

$$z_k = W_{2k} = \Lambda_2(d_k)\big|_{\tilde{\Lambda}_1(d_k) = 0} \qquad (36)$$

and the decision at the decoder DEC output is

$$\hat{d}_k = \operatorname{sign}[\Lambda_2(d_k)]. \qquad (37)$$
The decoding delays introduced by decoder DEC (DEC = DEC1 + DEC2), the interleaver and the deinterleaver imply that the feedback information zk must be used through an iterative process as represented in Fig. 4a, 4b. In fact, the global decoder circuit is composed of P pipelined identical elementary decoders (Fig. 4a). The pth decoder DEC (Fig. 4b) input is made up of demodulator output sequences (x)p and (y)p through a delay line and of extrinsic information (z)p generated by the (p−1)th decoder DEC. Note that the variance $\sigma_z^2$ of the extrinsic information and the variance of $\tilde{\Lambda}_1(d_k)$ must be estimated at each decoding step p.
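A minimal feedback-loop skeleton in the spirit of Fig. 3b and relations (34)-(36) follows. Here dec1 and dec2 stand for two modified BAHL et al. decoders (e.g. the bcjr_llr sketch extended to accept the extra input zk) and perm is the interleaver permutation; these names, and the approximation of f by the identity in (30), are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def turbo_decode(x, y1, y2, perm, dec1, dec2, sigma2, n_iter=18):
    inv = np.argsort(perm)                    # deinterleaving permutation
    z = np.zeros(len(x))                      # extrinsic feedback, zero at p = 1
    for p in range(n_iter):
        L1 = dec1(x, y1, z, sigma2)           # DEC1 on (xk, y1k, zk), eq. (34)
        # remove the zk term to approximate Lambda~1, eq. (35); sigma_z^2 is
        # estimated from the current feedback, as noted in the text
        L1t = L1 - 2.0 * z / np.var(z) if p > 0 else L1
        L2 = dec2(L1t[perm], y2, sigma2)      # DEC2 works on interleaved data
        W2 = L2 - L1t[perm]                   # DEC2 extrinsic, f taken as identity
        z = W2[inv]                           # deinterleaved feedback zk, eq. (36)
    return (L2[inv] > 0).astype(int)          # decision (37), natural bit order
```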
`
V-2 Interleaving
The interleaver uses a square matrix and bits {dk} are written row by row and read pseudo-randomly. This non-uniform reading rule is able to spread the residual error blocks of rectangular form that may set up in the interleaver located behind the first decoder DEC1, and to give the largest possible free distance to the concatenated (parallel) code.
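A hedged sketch of this interleaving idea follows. The paper does not give its exact pseudo-random reading rule; a seeded permutation of the read addresses stands in for it here.

```python
import numpy as np

def square_interleaver(n_side, seed=42):
    """Row-by-row write into an n_side x n_side matrix, pseudo-random read."""
    idx = np.arange(n_side * n_side).reshape(n_side, n_side)  # write order
    rng = np.random.default_rng(seed)
    return rng.permutation(idx.ravel())       # read addresses (permutation)

perm = square_interleaver(256)                # 256x256 as in section VI
```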
`
`VI - RESULTS
`
For a rate R=1/2 encoder with constraint length K=5, generators G1=37, G2=21 and parallel concatenation (R1=R2=2/3), we have computed the Bit Error Rate (BER) after each decoding step using the Monte Carlo method, as a function of the signal to noise ratio Eb/N0, where Eb is the energy received per information bit dk and N0 is the one-sided noise power spectral density. The interleaver consists of a 256×256 matrix and the modified BAHL et al. algorithm has been used with a data block length of N = 65536 bits.
`
Fig. 4a Modular pipelined decoder, corresponding to an iterative process of the feedback decoding.

Fig. 4b Decoding module (level p).
`
In order to evaluate a BER equal to $10^{-5}$, we have considered 128 data blocks, i.e. approximately 8×10⁶ bits dk. The BER versus Eb/N0, for different values of p, is plotted in Fig. 5. For any given signal to noise ratio greater than 0 dB, the BER decreases as a function of the decoding step p. The coding gain is fairly high for the first values of p (p = 1, 2, 3) and carries on increasing for the subsequent values of p. For p = 18 for instance, the BER is lower than $10^{-5}$ at Eb/N0 = 0.7 dB. Remember that the Shannon limit for a binary modulation with R = 1/2 is Pe = 0 (several authors take Pe = $10^{-5}$ as a reference) at Eb/N0 = 0 dB. With parallel concatenation of RSC convolutional codes and feedback decoding, the performances are at 0.7 dB from Shannon's limit.
The influence of the constraint length on the BER has also been examined. For K greater than 5, at Eb/N0 = 0.7 dB, the BER is slightly worse at the first (p=1) decoding step and the feedback decoding is inefficient to improve the final BER. For K smaller than 5, at Eb/N0 = 0.7 dB, the BER is slightly better at the first decoding step than for K equal to 5, but the correction capacity of encoders C1 and C2 is too weak to improve the BER with feedback decoding. For K=4 (i.e. 8-state elementary decoders) and after iteration 18, a BER of $10^{-5}$ is achieved at Eb/N0 = 0.9 dB. For K equal to 5, we have tested several generators (G1, G2) and the best results were achieved with G1=37, G2=21.
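A Monte Carlo BER estimation skeleton in the spirit of this section is sketched below. encode and decode stand for a rate-R turbo encoder and the iterative decoder sketched earlier (assumed callables); sigma follows from Eb/N0 for unit-energy antipodal symbols.

```python
import numpy as np

def ber_monte_carlo(encode, decode, ebn0_db, n_blocks=128, n_bits=65536, rate=0.5):
    rng = np.random.default_rng(1)
    # Es = 1, Eb = Es/R, sigma^2 = N0/2  ->  sigma^2 = 1 / (2 R Eb/N0)
    sigma = np.sqrt(1.0 / (2.0 * rate * 10.0 ** (ebn0_db / 10.0)))
    errors = 0
    for _ in range(n_blocks):
        d = rng.integers(0, 2, n_bits)                # information bits
        tx = 2.0 * np.asarray(encode(d)) - 1.0        # antipodal coded symbols
        rx = tx + sigma * rng.normal(size=tx.size)    # gaussian channel
        errors += np.count_nonzero(decode(rx) != d)   # bit errors of the block
    return errors / (n_blocks * n_bits)
```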
`
`1068
`
`
`
Fig. 5 Binary error rate given by iterative decoding (p = 1, ..., 18) of the code of Fig. 2 (rate R = 1/2); interleaving 256×256.
`
For low signal to noise ratios, we have sometimes noticed that the BER could increase during the iterative decoding process. In order to overcome this effect, we have divided the extrinsic information zk by $[1 + \theta\,|\tilde{\Lambda}_1(d_k)|]$ with θ = 0.15.
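As a stand-alone helper, this damping reads as follows (θ = 0.15 as in the text; the function name is ours):

```python
import numpy as np

def stabilise(z, L1_tilde, theta=0.15):
    """Divide the extrinsic information zk by [1 + theta*|Lambda~1(dk)|]."""
    return np.asarray(z) / (1.0 + theta * np.abs(np.asarray(L1_tilde)))
```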
`
In Fig. 6, the histogram of extrinsic information (z)p has been drawn for several values of iteration p, with all data bits equal to 1 and for a low signal to noise ratio (Eb/N0 = 0.8 dB). For p = 1 (first iteration), extrinsic information (z)p is very poor about bit dk; furthermore, the gaussian hypothesis made above for extrinsic information (z)p is not satisfied! Nevertheless, when iteration p increases, the histogram merges towards a gaussian law with a mean equal to 1. For instance, for p = 13, extrinsic information (z)p becomes relevant information concerning data bits.
`
VII - CONCLUSION

In this paper, we have presented a new class of convolutional codes called Turbo-codes whose performances in terms of BER are very close to SHANNON's limit. The decoder is made up of P pipelined identical elementary modules and the rank p elementary module uses the data information coming from the demodulator and the extrinsic information generated by the rank (p−1) module. Each elementary module uses a modified BAHL et al. algorithm which is rather complex. A much simpler algorithm yielding weighted (soft) decisions has also been investigated for Turbo-codes decoding [2], whose complexity is only twice the complexity of the VITERBI algorithm, and with performances which are very close to those of the BAHL et al. algorithm. This new algorithm will enable encoders and decoders to be integrated in silicon with error correcting performances unmatched at the present time.
`
`.
`'
`.
`1
`'
`k—1
`Pr{R,,/R1 }=):z 2 Zy,~(R,,,m m)a,{_,(m ). (A6)
`i=0 j=0
`m m'
`
`Finally probability 01,’; (m) can be expressed from probability
`
`a,‘,_, (m) by the following relation
`z:i1’r'(RIum"")0‘I{—t(m')
` , (A7)
`XX X Z’t’;(Rt.m'm)0l;}'—1(m')
`m m’
`i=0 j=0
`
`a;;(m) =
`
`In the same way, probability Bk (m) can be recursively
`calculated from probability Bk“ (m). From relation (16), we
`have
`
`P,{R:”+r /S. = m} _
`fik(m)= —
`
`P, {R121 /Rf}
`By using BAYES rule, the numerator is equal to
`I:{R£“+r/St :"1}:%:lE1_'r;)Pr{RlI'v+2/Sk+l="1'}
`Pr{dk+1=i'Slc+l = ”'-'Rk+1/St = m}’
`(A9)
`By taking into account expressions of 'y;(Rk+1,m,m') and
`Bk“ (m), we can write
`E i‘1’r(Rk+1-m:m')fik+l(m')
`P,{R;+r/Rik}
`
`W"
`
`(A10)
`
`In substituting k by (k+1) in relation (A6), the denominator
`of (A10) is equal to
`
`1
`1
`.
`Pr{R,(+1/R1"}=EE 20 ZOyi(R,H,,m’m)a,fi(m’). (A11)
`m m’
`.'=
`j=
`Finally probability Bk (m)can be expressed from probability
`Bk“ (m'), by the following relation
`E 5 Yr'(Rk+1»m: m')fik+r(m')
`fik(m)= . (A12)
`2; Q) A>:0y.«(R,‘,.1,m'm)a,:tm')
`mm l=
`}=
`REFERENCES
`
`[1] L.R. Baht, J. Cocke, F. Jeinek and J. Raviv,
`"Optimal decoding of linear codes for minimizing symbol
`error rate", IEEE Trans. Inform Theory, vol. IT-20, pp. 248-
`287, March 1974.
`
`[2] C. Berrou, P. Adde, E. Angui and S. Faudeil, "A
`low complexity soft-output Viterbi decoder architecture", to
`appear at ICC’ 93.
`
`iteration #1
`
`E,,_..r.~«—~
`0°/.
`NORMALIZED
`.1
`SAMPLES
`
`o
`
`
`
`Fig. 6 Histograms of extrinsic information z after
`iterations 31.4.13 at EbINo = 0.8 dB;
`all information bits d=1.
`
`ANNEX I : EVALUATION OF PROBABILITIES 11,‘; (m)
`AND [3,(m) .
`
`From relation (15) probability a},(m) is equal to
`Pr{d,, = r,s,, = m,R,"",R,,}
`Pr{R1"'1,Rk}
`
`aim) =
`
`(A1)
`
`Pr{dk =1’, s,, = m,Rk/R1"'1}
`‘
`Pr{R,, /R,"“}
`The numerator of a,i(m) can be expressed from state Sk_1
`and bit dk -1.
`pr{d,, =i,S,, =m,Rk/R,"'1} =
`1
`2 zP,{d,, =i,d,,_, =j,s, = m. S,‘_1=m',R,,/RIM} (A2)
`m‘ ,'=o
`By using BAYES rule, we can write
`Pr{d,, =1", 5,, = m, R,, /R,'‘"} =
`
`r Pr{d,,_1=j,S,,_,= m’,R1"“}
`§j:‘:tJ
`P,{Rl'"l}
`P,{d,, :1, S,‘ = m,R,, /d,_, = j, s,,_, = m’, R,"“}. (A3)
`By taking into account that events after time (k-1) are not
`
`influenced by observation R,"'1 and bit dk_1 if state Sk_1 is
`known and from relation (17) we obtain
`Pr{d, =1‘, 5,, = m,R,,/Rf“) =
`r
`
`.
`
`(A4)
`Z207.-(R.,m’m)a,£_,(m’).
`m’ }'=
`The denominator can be also expressed from bit dk and state
`5k
`
`I
`P,{R,,/R,"’1}=E_)j0 pr{d,, =1‘, s,, =m, 12,, /R,"“} (A5)
`M l:
`and from relation (A4), we can write :
`
`1070